A software engineer with a strong foundation in Computer Science, passionate about technology and nature alike, highly determined and resilient, with experience that includes:
Owning, developing, and maintaining automated Machine Learning pipelines for skills classification, soft-skills labeling, and turnover calculations using Python, AWS Batch, Bedrock, and SageMaker, in collaboration with data scientists and product teams
Leading the design, planning, integration, and release of end-to-end Machine Learning production pipelines, from data ingestion and refinement/cleaning through to delivering the data to the consuming service-layer team, while ensuring seamless deployment and monitoring of model performance with robust observability practices
Integrating diverse data sources and outputs using Amazon Redshift (Serverless), RDS, and S3 to streamline data ingestion for model runs, improving the efficiency of owned model pipelines
Owning, maintaining, and contributing to infrastructure resources using Terraform, as well as automating and refining new and existing Machine Learning deployment cycles with GitLab CI, incorporating internally offered package and container-image scanning tools like Snyk and ECR, pytest for unit testing, Black for code formatting, and tflint for Terraform linting, enforcing policies and best practices; resulting in at least 40% faster development cycles as reflected in the team's health metrics
Establishing team-owned initiatives, such as postmortem analysis, to track production issues with customer impact, ensuring the team pinpoints root causes, learns from past incidents, and implements preventive measures
Working closely with the product manager to define and prioritize features and business requirements, translating them into technical specifications and actionable tasks for the team, ensuring alignment with business goals and customer needs
Supporting the team with ongoing mentorship of 3 junior Machine Learning engineers through recurring weekly 1:1 meetings, code reviews, and personalized feedback loops, while also providing guidance and support to 2 senior engineers in various areas
Establishing technical workshops for the team, focusing on areas such as code walkthroughs, service integrations that are essential to the team's work and require a deep understanding of the underlying technologies, and best practices for Machine Learning model deployment and monitoring
As a key member of the Source Code Management (SCM) platform team at Fidelity Investments, I manage and optimize GitHub Enterprise operations and integrations, driving efficiency and compliance across the organization. My contributions include:
Developed Python-based automation solutions to streamline GitHub Enterprise maintenance and enforce company policies, creating reusable Python modules and conducting comprehensive code reviews to ensure adherence to best practices.
Spearheaded the automation of integrations between GitHub Enterprise and internal services, collaborating closely with cross-functional teams to design and implement efficient API solutions, and ensured the secure management and compliance of source code repositories by implementing access controls and automating security scans within GitHub Enterprise.
Designed and maintained CI/CD pipelines (Jenkins) for containerized applications, such as Mend for code scanning, deployed on Kubernetes clusters, using Infrastructure as Code (IaC) tools like Terraform and OpenTofu to manage infrastructure deployment and scaling.
Led research and analysis efforts to enhance existing automation pipelines, identifying areas for refactoring or redesign, and collaborated with team members to deliver robust, scalable solutions addressing evolving business needs.
Actively contributed to continuous improvement initiatives by introducing automation best practices, like linting, unit testing (pytest), and CI/CD optimizations, resulting in faster development cycles and improved code quality.
Implemented monitoring and alerting systems for GitHub Enterprise using tools like Datadog and Splunk, enabling early detection and swift resolution of issues and significantly reducing downtime.
Played a key role in mentoring junior developers within the team, sharing knowledge through 1:1 recurring meetings, code reviews, workshops, and documentation, fostering a culture of learning and technical excellence.
Designed and built services for 3M's DevOps and internal development teams, utilizing Docker, Kubernetes, GitHub Actions, and AWS Systems Manager for CI/CD and configuration management.
Automated essential jobs in development and deployment pipelines using Python, Bash, and AWS Lambda, and participated in architectural designs to enhance infrastructure.
Led documentation efforts using Confluence, creating usage guides with draw.io diagrams, Markdown, and README files, and contributed to the GitHub Enterprise Server migration for internal development use.
Used Terraform for infrastructure provisioning, enforced coding standards with SonarQube, and promoted high code quality and collaboration through GitHub for code reviews.
Provided expert assistance to customers using AWS DevOps services such as CloudFormation, EKS (Elastic Kubernetes Service), ECS (Elastic Container Service), CodeDeploy, CodePipeline, CodeBuild, and CodeArtifact.
Helped customers automate and design pipelines and tasks using AWS SDKs, CLI, CloudFormation, and other automation services.
Identified, analyzed, and resolved complex technical issues related to AWS DevOps services and their integrations.
Communicated effectively with customers to understand their challenges, provide clear and timely solutions, and optimize the performance and scalability of their CI/CD pipelines and infrastructure while following best practices and compliance requirements within DevOps workflows.
Responded to and managed high-severity incidents, coordinating with multiple teams to ensure swift resolution.
Worked closely with AWS engineering and product teams to escalate issues and relay customer feedback for service improvements.
Mentored new team members and provided technical guidance and support to help them succeed in their roles.
Utilized GitHub for source control management and Ansible for automating the configuration of Linux environments tailored for the company’s specific application needs.
For Node-based applications, used npm for package management and PM2 for process management to streamline the build process, while Jenkins orchestrated automated testing and deployment workflows for different environments.
Administered Jenkins pipelines to facilitate continuous integration and delivery, ensuring seamless updates and robust software delivery cycles.
Diagnosed and resolved deployment and runtime issues with production software using Ansible for configuration fixes and PM2 for monitoring and process management.
Integrated test-driven development (TDD) practices into the development workflow, ensuring high-quality code and efficient testing processes.
Administered a School Information Management System (SIMS) that enabled teachers, students, and the administration to manage school data and resources efficiently.
Wrote and maintained scripts using Python to automate system administration tasks.
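As a hypothetical illustration of the kind of administration script referred to above (the monitored path and threshold are placeholders, not the actual tasks automated), a minimal sketch using only the standard library:

```python
"""Illustrative sketch: alert when a filesystem crosses a usage threshold."""
import shutil


def check_disk_usage(path: str, threshold: float = 0.9) -> bool:
    """Return True while usage at `path` stays below the given fraction."""
    usage = shutil.disk_usage(path)
    return (usage.used / usage.total) < threshold


if __name__ == "__main__":
    # Placeholder mount point; a real script would read these from config.
    for mount in ("/",):
        status = "OK" if check_disk_usage(mount) else "LOW SPACE"
        print(f"{mount}: {status}")
```

Such checks would typically run on a cron schedule, with alerts sent by mail or a messaging hook rather than printed.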
Participated in the school's infrastructure and network design.
Taught an IT course for IGCSE and American Section (International) students.
Provided technical support to staff and students, troubleshooting hardware and software issues.
Collaborated with developers, technicians, and stakeholders to ensure the Laboratory Information System (LIS) met technical and operational requirements, customizing the software for specific workflows.
Installed and configured Linux servers, set up PHP environments with MySQL databases, and implemented security protocols including firewalls and SSL for secure LIS operation.
Connected laboratory machines to the LIS, ensured accurate data transfer, and provided training and support to staff, creating guides to enhance system adoption and usability.
Monitored system performance, performed updates, resolved issues, and documented configurations and procedures for compliance and future troubleshooting.
Designed and deployed both wireless and wired network infrastructure for the university, configuring access points, routers, and switches while securing wireless communication with WPA2 protocols.
Implemented VLANs and inter-VLAN routing to enhance network segmentation and efficiency across both wired and wireless networks.
Monitored network performance using SNMP tools, resolved connectivity issues, and optimized traffic flow to ensure reliable and consistent network availability.
Documented the implementation of Layer 1 infrastructure, including cable installation, patch panels, and racks, ensuring compliance with physical network standards.
CorkSec is a monthly meetup (running since 2013) for anyone interested in Information Security in the Munster region (primarily Cork). It is modeled on the idea of a DefCon Group; in fact, it has an official DefCon Group code, DC35321.
At this event, I spoke about the importance of container security and some of the best practices for keeping containers safe, alongside a demo of vulnerability scanning tools provided by AWS, such as ECR basic scanning and Amazon Inspector.
Non-profit organization focused on caring for orphans; helping the blind, the deaf, and children with special needs; blood donations; poverty alleviation; and literacy training.
Document volunteering events, mainly convoys and multi-day volunteering trips.
Lead a team of photographers and plan event coverage.
Maintain documentation & lead video creation for every event.
Developed a Python wrapper for the GitHub API, leveraging OpenAPI descriptions to generate client code for interacting with GitHub services.
Designed and implemented reusable Python modules for interacting with GitHub services, ensuring consistent and efficient API calls.
Conducted comprehensive code reviews to ensure adherence to best practices and maintain high code quality standards.
Collaborated with cross-functional teams to design and implement efficient API solutions, ensuring secure management and compliance of source code repositories.
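The reusable wrapper modules described above might look roughly like the following minimal sketch. The class and method names here are illustrative, not the project's actual generated API (which was produced from GitHub's OpenAPI descriptions); only the standard library is used:

```python
"""Illustrative sketch of a thin client for the GitHub REST API."""
import json
import urllib.request


class GitHubClient:
    """Minimal wrapper that centralizes auth and base-URL handling."""

    def __init__(self, token: str, base_url: str = "https://api.github.com"):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _build_request(self, path: str) -> urllib.request.Request:
        # Every call carries the same authentication and media-type headers,
        # so consumers never repeat this boilerplate.
        return urllib.request.Request(
            f"{self.base_url}/{path.lstrip('/')}",
            headers={
                "Authorization": f"Bearer {self.token}",
                "Accept": "application/vnd.github+json",
            },
        )

    def get_repo(self, owner: str, repo: str) -> dict:
        # Fetch repository metadata as a parsed JSON dict.
        request = self._build_request(f"repos/{owner}/{repo}")
        with urllib.request.urlopen(request) as response:
            return json.load(response)
```

Centralizing request construction this way keeps every API call consistent, which is the main benefit of generating the client surface from the OpenAPI descriptions.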