Experience
  • May 2025 - Sep 2025
    CGI
    DevOps Engineer Intern

    Tools: Ansible, GitLab, Docker, JFrog Artifactory, HashiCorp Vault

    At CGI, supporting one of Europe’s largest electricity suppliers, I discovered and implemented DevOps practices in a high-security environment.

    I designed a GitLab CI pipeline to build and publish Docker images to JFrog Artifactory. Team members only had to update their Dockerfile and commit it to a non-default (feature) branch; the pipeline then automatically built the image and pushed it to the appropriate repository in Artifactory. Depending on the branch, images were routed to QA, or to production after a successful merge request, ensuring a controlled and traceable promotion flow while reducing manual steps to near zero.
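    A build-and-publish job of this kind can be sketched in GitLab CI syntax as follows (the registry URL, repository names, and credential variables are illustrative placeholders, not the actual project configuration):

```yaml
# .gitlab-ci.yml -- illustrative sketch, not the original pipeline
build-image:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  rules:
    # run for any branch pipeline; the branch decides the target repo below
    - if: '$CI_COMMIT_BRANCH'
  variables:
    # hypothetical Artifactory registry hostname
    ARTIFACTORY_REGISTRY: artifactory.example.com
  script:
    # route images: default branch (post-merge) -> prod repo, feature branch -> QA repo
    - |
      if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
        REPO="docker-prod"
      else
        REPO="docker-qa"
      fi
    - docker login -u "$ARTIFACTORY_USER" -p "$ARTIFACTORY_TOKEN" "$ARTIFACTORY_REGISTRY"
    - docker build -t "$ARTIFACTORY_REGISTRY/$REPO/app:$CI_COMMIT_SHORT_SHA" .
    - docker push "$ARTIFACTORY_REGISTRY/$REPO/app:$CI_COMMIT_SHORT_SHA"
```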

    I also deployed and configured Renovate to automate dependency upgrades. Every weekday at 9 a.m., Renovate scanned tagged GitLab repositories, extracted their dependencies, and checked for new releases. Depending on the change type (based on Semantic Versioning), it either automatically applied safe updates or opened merge requests for review.
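    Such a setup roughly corresponds to a Renovate configuration like the one below (the schedule text and automerge rules are a plausible sketch, not the exact configuration used):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "schedule": ["before 10am every weekday"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "minor"],
      "automerge": true
    },
    {
      "matchUpdateTypes": ["major"],
      "automerge": false
    }
  ]
}
```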

    Finally, I built a GitLab CI/CD Catalog with autogenerated changelogs and releases, and developed a GitLab component that triggers Molecule tests on execution environments mirroring production, strengthening reliability before promotion.
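    A CI/CD Catalog component of that kind is declared roughly as follows (the input name, image variable, and Molecule scenario are illustrative, not the project's actual component):

```yaml
# templates/molecule-test.yml -- illustrative component sketch
spec:
  inputs:
    molecule_scenario:
      default: default
---
molecule-test:
  image: "$MOLECULE_IMAGE"   # hypothetical image mirroring the production environment
  script:
    - molecule test -s "$[[ inputs.molecule_scenario ]]"
```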

  • May 2023 - Aug 2023
    LIAS Laboratory
    Research Assistant Intern

    Tools: Python, HuggingFace

    During my end-of-bachelor internship at the LIAS Laboratory, I assisted a PhD student and contributed to a research project aimed at improving historians’ search workflows.

    The PhD student was contributing to the development of a historical atlas of the Nouvelle-Aquitaine region, and with the rapid rise of Large Language Models (LLMs), we explored how they could be applied to answer historian-like questions.

    We began by conducting a state-of-the-art review of LLMs that could be used either online or locally (given hardware constraints), and of the prompting methods available (zero-shot, contextual, etc.). We also created a benchmark of historian-like questions, categorized by type — quantitative, qualitative, open or closed — as well as by thematic area related to the region.
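    The difference between the zero-shot and contextual prompting methods we compared can be illustrated with a minimal prompt-construction sketch (the question and context strings are invented examples, not items from our benchmark):

```python
def zero_shot_prompt(question: str) -> str:
    # zero-shot: the model answers from its own knowledge only
    return f"Question: {question}\nAnswer:"

def contextual_prompt(question: str, context: str) -> str:
    # contextual: a passage (e.g. an atlas excerpt) is prepended to ground the answer
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"

# hypothetical historian-like question, in the spirit of the benchmark
q = "How many communes did the Vienne department count in 1900?"
print(zero_shot_prompt(q))
print(contextual_prompt(q, "Excerpt from the atlas..."))
```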

    Finally, we addressed broader research questions such as the interpretability of model-generated answers and their potential usefulness for historians.

    The research paper is available in French and in English.