I'm a highly experienced senior software engineer with 16 years of investment banking experience (and 20 years in the IT industry), working for various Tier 1 banks in both the City and Canary Wharf. Over the past 7 years I have specialised in Fixed Income, Credit and Market Risk, working on low-latency, high-volume trade processing, core technology, regulatory reporting, timeseries analysis, credit submission and workflow.
I'm highly delivery-focused and a natural problem solver, and I bring both a deep technical understanding of the technologies I use and the enthusiasm to get the job done. I'm also an experienced technical mentor and greatly enjoy helping others understand how best to use technology.
I have a degree in Computer Science (2:1) from Portsmouth University, with a first in Object Oriented Design.
Quality software design is vital to producing applications that fulfil business objectives, and I'm a firm believer that sound architectural and technical design is the key to being a successful software engineer. I'm also passionate about code being well designed, maintainable and fit for purpose, and I put great emphasis on best-practice design, readability and comprehensive testing.
Outside of work I have varied interests. I'm a keen runner, competing in ultra-distance races, which demonstrates my focus and commitment to achieving a goal. My long-term goal is to run 100 miles in under 24 hours. Some would say that's insane, and at times I'd struggle to disagree. To balance all that running I also enjoy playing computer games and building PCs. I always have some kind of side-project going on, usually focused on learning a new technology. The rest of my time is filled with all the things that being a married man with 3 gorgeous young children demands.
Often when people work outside the office they are described as "working from home". Great emphasis is placed on the quotes. People cannot possibly be productive at home, there are too many distractions! How can you tell that they're actually working and not watching daytime TV?
Well, I totally disagree with that. I don't "work from home" at all; I am a remote worker. I have no distractions at home and I focus totally on the job in hand. I don't have to waste countless hours a week sitting in a train or a car. I work whatever hours it takes to get the job done. And that's how I think you judge a remote worker: are they delivering the results that you expect?
I feel I'm most productive working from my well-equipped home office. Distractions are few, my concentration is maximised, and I can get work done when I might otherwise be sitting in a train or a car. I'm happy to attend meetings at a client site when required, but the majority of the time I'd rather be adding value to the projects I'm working on.
Fixed Income Trade Processing (FITP) are responsible for a trade processing platform built on a modified Gemfire in-memory cache. The platform infrastructure is underpinned by a Data Management Layer (DML) which provides an abstraction over the cache, enriched with support for high-volume, transactional processing. Business services are then plugged in to the platform to deliver the business functionality required of a trading platform.
My role on the team has been to extend the business functionality to meet the requirements of the business. In particular I have been responsible for the development and delivery of the Client Clearing functionality, which has allowed Nomura to provide additional services to its clients and meet regulatory demands.
I have spent considerable time using Apache Camel to implement Enterprise Integration Patterns which feed external systems (via XML and JSON) and provide workflow within the application. Camel is a powerful framework to use in a complex environment where data needs to be processed in different ways and marshalled from one system to another, or even within the same system.
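To give a flavour of the kind of route I mean, here's a minimal content-based router sketch; the endpoint URIs, header names and class name are invented for illustration and aren't the actual FITP endpoints:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

/**
 * Illustrative content-based router: trade events arrive on an internal
 * endpoint and are marshalled to XML or JSON depending on where they are
 * headed. Endpoint URIs and header names are invented for the example.
 */
public class TradeFeedRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:tradeEvents")
            .choice()
                .when(header("destination").isEqualTo("CLEARING_HOUSE"))
                    .marshal().jaxb()                       // POJO -> XML via JAXB annotations
                    .to("jms:queue:clearing.outbound")
                .otherwise()
                    .marshal().json(JsonLibrary.Jackson)    // POJO -> JSON
                    .to("http://reporting.example.com/api/trades")
            .end();
    }
}
```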
One of the core principles of the development team is TDD, which I have been more than happy to focus on. Comprehensive unit testing and effective integration tests allow the platform to be developed without errors disrupting critical business services.
The role has also included in-depth analysis of multithreading issues, garbage collection and memory usage using YourKit. This meant constructing a scripted test client which pushes the application in a defined way using a scenario, allowing repeatable tests to be conducted and performance metrics to be calculated. As issues were identified I then implemented fixes to address them.
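The actual client is internal, but in shape it was roughly this: a pool of threads replaying a scenario against the system and recording timings. A stripped-down sketch, with invented class names:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Stripped-down scenario runner: replays a fixed list of messages against
 * the system under test from a pool of threads and reports simple latency
 * figures. TradeGateway stands in for whatever client API the real system
 * exposes; everything here is illustrative.
 */
public class ScenarioRunner {

    public interface TradeGateway {
        void submit(String message) throws Exception;
    }

    public static void run(final TradeGateway gateway, List<String> scenario, int threads)
            throws InterruptedException {
        final List<Long> latenciesNanos = Collections.synchronizedList(new ArrayList<Long>());
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        for (final String message : scenario) {
            pool.submit(new Runnable() {
                public void run() {
                    long start = System.nanoTime();
                    try {
                        gateway.submit(message);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    latenciesNanos.add(System.nanoTime() - start);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long total = 0, max = 0;
        for (long nanos : latenciesNanos) {
            total += nanos;
            max = Math.max(max, nanos);
        }
        System.out.printf("sent=%d avg=%.2fms max=%.2fms%n",
                latenciesNanos.size(), total / 1e6 / latenciesNanos.size(), max / 1e6);
    }
}
```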
Fixed Income Development Core IT (FID Core IT) are responsible for a set of products which are used by other development teams within the bank. These applications provide services including deployment, monitoring and entitlement control. My role as a senior developer within FID Core IT is to take responsibility for enhancing this suite of applications and provide technical leadership in moving those technologies forward.
Each of the applications provides a different set of challenges, from web front ends to core frameworks that require a detailed knowledge of multithreading. This is a diverse environment to work in and requires focus and attention to detail. For example, I was asked to perform a detailed analysis of the current state of Java web frameworks with a view to recommending which to use for new development. After extensive analysis the Play Framework was chosen, and the next task was to integrate it with various existing components and technologies already in use. Key aspects of this integration have been proper Maven and Nexus repository support and SSO security. Going forward, my role as the domain expert on this technology will mean providing mentoring and training to other developers.
Another key aspect of the role is to form close working relationships with the various development teams to ensure that their requirements are met. This takes excellent communication skills and the ability to understand what's really required rather than what might be initially asked for. The goal is always to evolve solutions that satisfy the needs of these teams each with their unique challenges and priorities.
MARS is a multi-asset risk pricing system that publishes risk analytics to downstream systems, which use those figures to calculate P&L for the trading desks. The main function of MARS is to take market and trade data, format it into a message, pass it into an analytics engine and then publish the resulting measures to downstream systems. Upstream and downstream, data is passed to and from Gemfire, a highly available distributed cache, and all processing takes place on a Symphony GRID. The application currently supports Loans, Bonds and CDS flow products and I've enjoyed getting exposure to this new business area.
My role as a core Java developer on the MARS project has been to adapt and modify the application to meet the changing demands of the business. That has involved learning an extremely large and complex codebase. A great emphasis is placed on testing as the impact of miscalculated figures is high.
Communication skills have played a large part in this role as changes to MARS are never made in isolation. Upstream, downstream and analytics teams all have to be coordinated to ensure that once released the systems function correctly together. The project is organised on agile principles with daily scrums, monthly releases and a focus on responding to business needs as fast as possible.
In addition to working on the MARS application I have recently been moved on to the MARS Real-Time project which is a greenfield application still in the prototype stage. My role has been to prototype new features and help design a new architecture for a real-time pricing engine.
In my second contract with Credit Suisse I worked within the Credit Risk IT department on a component of the INSIGHT suite of applications. INSIGHT is a Credit Risk Management application covering all aspects of credit risk, from T-0 pre-trade deals through to monthly regulatory risk reports. The aspect I was responsible for was the Credit Risk Reporting application known as CRIS, which provides the Risk Analytics users with a data warehouse from which they produce regulatory reports.
The role was two-fold. Firstly, I was responsible for the full development lifecycle of new versions of the application. This entailed talking to the business users, analysing their requirements and producing functional requirements using tools such as mind-maps, use cases and activity diagrams (using Enterprise Architect); writing technical specifications; producing very detailed prototypes using Axure RP; designing the technical architecture; and implementing the next version of the application.
I was responsible for producing the analysis and design for 3 new versions (v5, v6 and v7) of the application which were scheduled for release over the next 18 months. Those designs had to be of the highest quality as versions v6 and v7 were due to be implemented by an out-sourced off-shore team.
The second element of my role covered the production support aspects of the current live application. Probably 10% of my time was spent analysing issues in production and ensuring that timely fixes were applied to resolve them. This also meant working on some scheduled change requests, which really helped to give me exposure to the large CRIS legacy code base.
Another part of my role was to provide training and technical mentoring to off-shore teams. This has been a common part of my work for the past few years in different contracts and has helped to fully integrate those teams into the project team as a whole.
My current obsession away from work is PC modding and gaming. I'm having fun building watercooled PCs, overclocking CPUs to within an inch of their lives and then playing games that push the rig to the limit. I can't decide which I enjoy more, the building or the gaming; both are fun.
Over the past 10 years I've been a very active runner and event organiser. I train hard and like to keep very fit. My weekly mileage often exceeds 50 miles, and somehow I manage to fit that in around my busy schedule. Luckily I don't need a great deal of sleep, so getting up early and running is my normal plan. The point of the training is to allow me to race ultra-distance events; my current favourite distance is 100km, which seems an awfully long way to most people. 2014 is going to see me try the 100 mile distance, which I'm hoping to complete in under 24 hours. Honestly though, I'd be happy just to finish. On top of my training and racing I'm also an event organiser, with an ongoing involvement in the Halstead & Essex Marathon as the entries secretary. I've seen that event grow over the past 10 years into one of the top-rated marathons in the country.
Since its inception I've been really interested in node.js. JavaScript is a departure from the Java that I write at work and it makes a nice change. I've done a lot of JavaScript on websites for different banks, so I've been familiar with the language for a long time. When node.js was conceived it intrigued me and I've been a bit of a fan ever since. I've written an awful lot of little projects in node.js, some of which I've open-sourced on GitHub. I'd love the opportunity to write node.js commercially but at the moment Java pays the bills.
Having tests and ensuring they run and pass is important. Very important. Whilst you're pushing new features in you have to know that existing functionality works. Testing, and in particular testing first is an obsession of mine and one I encourage in others.
My default workflow when adding features to a system is test-first: write a failing test that captures the new behaviour, write just enough code to make it pass, then refactor with the tests as a safety net.
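As a sketch of what that looks like in Java (the FeeCalculator and its behaviour are invented purely for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

/**
 * Test-first sketch with an invented FeeCalculator: the tests are written
 * first and fail, then the simplest implementation that makes them pass is
 * added, and both are refactored with the tests as a safety net.
 */
public class FeeCalculatorTest {

    @Test
    public void appliesLargeTradeDiscountToFee() {
        // 1,000,000 notional at 2bps = 200.00, less a 10% large-trade discount = 180.00
        assertEquals(180.00, new FeeCalculator().feeFor(1_000_000, true), 0.001);
    }

    @Test
    public void chargesFullFeeForOrdinaryTrades() {
        assertEquals(200.00, new FeeCalculator().feeFor(1_000_000, false), 0.001);
    }

    /** The simplest implementation that makes the tests pass; refactored later. */
    static class FeeCalculator {
        double feeFor(double notional, boolean largeTrade) {
            double fee = notional * 0.0002;          // 2 basis points
            return largeTrade ? fee * 0.9 : fee;     // 10% discount for large trades
        }
    }
}
```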
It's no good having great test coverage and not running those tests. You can't leave it up to the devs to run every test before checking in; the most I think you can require is that they run the tests they believe will be affected by their change. Continuous integration is the act of running all those tests automatically on every check-in. It's also the process of building those changes into a working, deployable artifact.
I see continuous integration as a phased process that occurs every time a dev checks in a change: compile the code, run the unit tests, run the integration tests, and package the result into a deployable artifact.
Should any of those stages fail then that change has broken the build, and the dev responsible should be informed and should prioritise fixing it. Having a working build is critical to ensuring that incremental changes all result in a working system.
After 18 years of working in investment banks I've worked in plenty of business areas: Trade Processing, Fixed Income, Flow, Credit & Market Risk, Reference Data. I'm not going to profess to be an expert in any of these areas, but I am generally familiar with them. I'm primarily a technologist and I've focused less on the acquisition of business knowledge and more on honing my skills as a software engineer. I do enjoy working in finance though, and welcome the opportunity to learn new business areas.
I like to think that over the past 20 years I've got rather good at it. I take pride in producing high quality, well tested, functional code that is easy to maintain and add features to.
I'm also a pragmatist and I try to balance my desire to create beautiful code with actually getting the job done. A good example would be adding a new feature to code written in a style that I don't particularly like. I'm not one to start converting it to how I like it. Sure, if I have to comprehensively refactor the class then I'll do so in my own style, but if I'm just adding a feature I'll adopt the style the code is already written in. Having two styles in one class is just ugly and I'd rather see consistency.
Every codebase has areas that you honestly don't want to touch and that's dangerous. Understanding your application and being able to change it is critical. There is a solution though; Step 1: Get decent test coverage. Step 2: Refactor. I'm very good at both.
I've worked on systems where it's been almost impossible to add new functionality because the engineers who built them have left and no one really understands how they work. Eventually, of course, something does need to change and then it's time to try and get the code under test. Rarely is it possible to spend the time developing a full set of tests; often it's difficult to tell what the code is even doing. The trick is to isolate the parts that need to be changed and get those parts under test. Often it's possible to find seams in the existing code where you can extract code into separate classes. Then the job of testing gets easier because you can test that part in isolation. Finding seams is a skill though, and knowing whether the extraction of that code is safe is a matter of experience.
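Here's a trivial sketch of what I mean by a seam; all of the classes are invented for the example. The calculation that was buried inside a big legacy method gets pulled out behind a small interface and injected through the constructor, so it can be tested in isolation and the legacy class can be tested with a stub:

```java
// All of these classes are invented for the example; in real code each
// would live in its own file.

// The seam: the calculation that used to be inlined in a big legacy method
// is extracted behind a small interface so it can be tested (or stubbed)
// in isolation.
interface AccrualCalculator {
    double accruedInterest(double notional, double rate, int days);
}

class SimpleAccrualCalculator implements AccrualCalculator {
    @Override
    public double accruedInterest(double notional, double rate, int days) {
        return notional * rate * days / 360.0;   // the logic lifted out of the legacy method
    }
}

class LegacyTradeProcessor {
    private final AccrualCalculator calculator;

    // Tests can pass in a stub; production code passes the real calculator.
    LegacyTradeProcessor(AccrualCalculator calculator) {
        this.calculator = calculator;
    }

    double settlementAmount(double notional, double rate, int days) {
        return notional + calculator.accruedInterest(notional, rate, days);
    }
}
```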
My definition of legacy code is simply code that hasn't got appropriate test coverage. Put the tests in and it's not legacy code any more; it's code that can be modified, flexed, and enhanced. Let me help you turn your codebase into a place that you don't fear to tread.
Does processing time matter to you? In most applications it does and I'm experienced in finding bottlenecks and eradicating them. You have to design for performance. I can help with that too.
Systems are built from both new and old applications, and integrating them reliably can be hard. One of the key aspects of my current role with Nomura has been learning and using Apache Camel. If you're not familiar with Apache Camel, it's a framework which allows you to implement enterprise integration patterns. It's configuration based and treats the interactions between systems as messages flowing along routes. I won't go into a ton of detail here as there's lots of information on the internet about it.
One short example though: say you want to create a process that watches a directory for a file and, when one is created, reads it, writes a new file and then FTPs it over to a file share on a different machine. That's a very common requirement, yet a pure Java implementation would have a surprising number of intricacies: IO issues, directory monitoring, charsets, stream handling, the list goes on. What Apache Camel provides is a set of building blocks that allow you to define this process in just a few lines of configuration. Exception handling is taken care of, and you get to think about adding value to the project rather than debugging low-level Java IO.
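Something along these lines, as a sketch; the directory names, host and credentials are placeholders rather than anything real:

```java
import org.apache.camel.builder.RouteBuilder;

/**
 * Sketch of the watch-a-directory-then-FTP process described above.
 * The paths, host name and credentials are placeholders.
 */
public class FileTransferRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // If anything goes wrong, retry a few times and then park the file
        // in an error directory rather than losing it.
        onException(Exception.class)
            .maximumRedeliveries(3)
            .handled(true)
            .to("file:/data/outbound/error");

        from("file:/data/inbound?move=.done")         // watch the directory; archive processed files
            .convertBodyTo(String.class, "UTF-8")     // charset handling done for us
            .transform(body().prepend("PROCESSED: ")) // stand-in for producing the new file content
            .to("ftp://batchuser@fileserver.example.com/share?password=secret"); // push to the remote share
    }
}
```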
One of the focuses of my blog is going to be Apache Camel, so look out for more posts about different aspects of this excellent framework.
Software engineering isn't just coding; there are many other aspects, and an important one is the ability to take a problem and find out what is actually needed. Say for instance that a user requests that a new button is added to a UI. You might take that at face value and just add it in. Then you find out that the new button forms part of a new business workflow and it affects some of the other functions of the system. OK, more changes required, and you put those in. Then the user thinks some more and decides the button needs to do something slightly different... This is not software engineering. This is just hacking. Fine for a little prototyping, but it's too iterative and you end up wasting a lot of time. This is amplified when you are also having to code up tests.
A better approach would have been to actually talk to the user about what that button did. Why do they need it? What new business logic is actually required? That's analysis. Armed with knowledge about what the business is trying to achieve, you design something that fulfils it. That may or may not be the button they requested initially; it could be something entirely different that the user never thought of.
I don't advocate the creation of huge specifications up front before coding a feature. I like to work in an iterative environment but one in which I've spent the time and effort understanding what is actually required.
I'm not an advocate of staying on the bleeding edge, but you have to know how far from it you are. Are your third-party libraries up to date? Are you using libraries that are no longer supported? Is there something better available now? I can help answer these questions and dig you out of technical debt.
Hiring is hard. There are many ways to judge whether a candidate is right for a role, and one of the critical ones is "Do they know their stuff?" I've conducted a large number of interviews and I can spot a lemon from a diamond. It's not all about asking questions though; pair programming in an interview can tell you whether the candidate can communicate and deal with problems.
When you need to know whether a developer knows their Java you need a technical test, so that you can see the depth of their knowledge and compare one interviewee with another. Most companies have their own internal Java technical test, but I wanted something that I could use anywhere. That led me to create Javonical - the canonical Java technical test. It's on GitHub and it's open-source, so if you want to use it go ahead, and if you want to change it then just ping me a pull request.
I like to think I know where the pitfalls are in multithreaded code. It's an area that people tend to get pretty rusty at and it helps to adopt some practices that mitigate the issues. Is it thread-safe? I'll help you find out if it is or not.
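A classic example of the kind of pitfall I mean (a sketch, not taken from any particular codebase): a check-then-act on a plain HashMap that looks harmless but isn't thread-safe, alongside one way to fix it:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Two versions of a simple price cache: one broken under concurrency, one safe. */
public class PriceCache {

    // Broken: two threads can both see "not present" and both load, and a
    // plain HashMap can be corrupted by concurrent writes.
    private final Map<String, Double> unsafePrices = new HashMap<String, Double>();

    public double unsafeGetOrLoad(String instrument) {
        Double price = unsafePrices.get(instrument);
        if (price == null) {                         // check...
            price = loadFromSource(instrument);
            unsafePrices.put(instrument, price);     // ...then act: not atomic
        }
        return price;
    }

    // Safer: ConcurrentHashMap plus putIfAbsent makes the check-then-act atomic,
    // at the cost of occasionally loading a value that loses the race.
    private final ConcurrentMap<String, Double> safePrices = new ConcurrentHashMap<String, Double>();

    public double safeGetOrLoad(String instrument) {
        Double price = safePrices.get(instrument);
        if (price == null) {
            price = loadFromSource(instrument);
            Double raced = safePrices.putIfAbsent(instrument, price);
            if (raced != null) {
                price = raced;                       // another thread got there first
            }
        }
        return price;
    }

    private double loadFromSource(String instrument) {
        return 100.0;                                // placeholder for a real market-data lookup
    }
}
```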
One of the most rewarding aspects of my career has been helping other developers to improve their skills and grow their knowledge. In many of my roles I've worked with recent graduates and junior developers who are still in the process of learning the software engineering trade.
I can help developers to take a step back from their day to day work and look at aspects of software engineering which can improve the quality of their code and increase their productivity. That might be how to design an immutable class, or perhaps how to use their IDE more effectively. Unix and Windows tricks and tips, class structuring, refactoring, the list is endless. I really enjoy helping others to improve.
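To take the first of those as an example, this is the sort of small immutable class I'd walk someone through; the Money domain is invented for the illustration:

```java
import java.math.BigDecimal;

/**
 * The sort of small immutable value class I'd use as a mentoring example:
 * a final class with final fields, no setters, validation in the constructor,
 * and "modifications" that return a new instance. The domain is invented.
 */
public final class Money {

    private final BigDecimal amount;
    private final String currency;

    public Money(BigDecimal amount, String currency) {
        if (amount == null || currency == null) {
            throw new IllegalArgumentException("amount and currency are required");
        }
        this.amount = amount;
        this.currency = currency;
    }

    public BigDecimal getAmount() {
        return amount;
    }

    public String getCurrency() {
        return currency;
    }

    /** Returns a new Money rather than mutating this one. */
    public Money add(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("Cannot add " + other.currency + " to " + currency);
        }
        return new Money(amount.add(other.amount), currency);
    }

    @Override
    public String toString() {
        return amount + " " + currency;
    }
}
```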