+1 (425) 864-2035 | ajay.naik44@gmail.com | LinkedIn | GitHub
Java Developer | Backend Developer | Software Engineer | Full Stack Developer
11+ years as a J2EE & Big Data architect delivering scalable solutions, from design to deployment.
Created a Java-based test automation framework for student reports, improving grading efficiency; also contributed to JavaScript frontend development and added comprehensive logging.
Aug 2021 – Sep 2022
Modernized a legacy application by transitioning from a monolithic to a microservices architecture using Docker and Kubernetes, cutting development, maintenance, and server costs by 20%.
Developed a new module from scratch in Ruby on Rails for digital-signature customers, meeting all requirements while significantly increasing development speed and enabling new APIs to be created quickly.
Automated build and deployment processes with a Jenkins pipeline, reducing lead time from 8 hours to 1 hour and enhancing development velocity. Implemented GitOps principles for continuous integration and deployment.
Enhanced data security by implementing a GDPR-compliant design that eliminated plaintext storage of sensitive user data.
Improved API accessibility by implementing version-agnostic backward and forward compatibility for REST services, eliminating client-side version adaptation issues.
Apr 2020 – Apr 2021
Implemented GDPR compliance for PayPal's large Hive data warehouse by developing a PySpark job to mask specific primary keys; the job processes 2 million records in 30 minutes despite Hive 1.1's limitations.
Streamlined data ingestion by building a robust data pipeline using Spark for processing, Hive for storage, and Kafka for real-time intake, meeting client requirements.
Enhanced data processing by creating a Java application that listens to Kafka topics, parses XML files, extracts data, and stores it in Hive tables, efficiently handling batches of 50,000 records.
Developed a RESTful API for PayPal Honey Entity using Spring and Java, facilitating efficient data access and management.
Dec 2017 – Apr 2020
Transformed learning-module processing by converting a serial job system into a parallel Kafka-powered pipeline, boosting throughput 10x: 1 million records processed in 1 hour overall instead of 1 hour per job.
Architected a data pipeline that integrates archived and current data into a cost-effective Big Data cluster, enabling efficient reporting. Reduced report generation time for 5 million records from 24 hours to 30 minutes.
Developed new features and REST services for the Certification and Curriculum modules based on product requirements.
May 2016 – Dec 2017
Collaborated closely with the business to build a robust framework within Matrix that efficiently calculates valuations for a portfolio of 90,000+ listed trades.
Revamped Deutsche Bank's exposure assessment for 90,000+ listed trades: implemented advanced back-testing methods that improved risk-analysis accuracy by 30%, and used agile practices to optimize back-testing runs for faster turnaround.
Accelerated the trade valuation process by 50% by designing and implementing a high-performance Matrix system for 90,000+ trades.
Optimized a distributed system using Java, Hazelcast, and multi-threading, cutting trade data processing time from 48 hours to 4 (a 12x speedup) and significantly reducing costs through advanced data structures and algorithms.
Wrote a Hadoop MapReduce tool to process 1 million trades in an HBase table in 10 minutes (a 67% time reduction).
Increased team productivity by 25% and received Recognition and Spot Excellence awards for efficient coding.
Feb 2015 – May 2016
Implemented microservices to streamline core functions: developed REST APIs for Breeds, Judges, and Events, cutting data access time by 50%.
Developed secure login services using Spring Security for LDAP, GIGYA, and database systems, ensuring user confidentiality and data integrity.
Automated data synchronization, slashing manual effort by 80% and enhancing data consistency across Oracle and MySQL databases.
May 2011 – Feb 2015
Streamlined messaging campaign delivery: created an automated daily data upload for us.hsbc.com customer data, improving customer engagement by 30%.
Streamlined campaign data retrieval, saving Content Management team 1 hour per day.
Mar 2010 – Apr 2011
Implemented and customized an Account, Budget, Works, and Billing Monitoring System, achieving a 20% increase in client satisfaction; the custom billing module reduced errors by 30% and earned a commendation for exceeding client expectations.
M.S. Computer Science | City University of Seattle, Seattle, WA
Course Outline: Algorithms, Advanced Parallel Processing, Advanced Operating Systems, Distributed Systems
Sep 2022 – Mar 2024
GPA: 4.0
PG Diploma Computer Science | Centre for Development of Advanced Computing, India
Course Outline: Data Structures, Databases, Object-Oriented Programming
Jul 2009 – Mar 2010
GPA: 3.2
B.Tech. Computer Science | Ram Meghe Institute of Technology & Research, India
Course Outline: Computer Networks, Operating Systems
Jul 2005 – Jan 2009
GPA: 3.5
AWS Certified Solutions Architect – Associate
Cloudera CCA Spark and Hadoop Developer | CCA175 (Apr 2023)
Confluent Certified Developer for Apache Kafka | CCDAK (Mar 2020)
Databricks Certified Associate Developer for Apache Spark 2.4 with Scala 2.11 | CRT020 (Dec 2019)
Oracle Java EE 5 Web Component Developer | OCWCD (Oct 2011)
Oracle Certified Professional Java SE 6 Programmer | OCJP (Jun 2011)