
Hello, I'm Thomas

Thank you for coming here.

I want to write, so that's what I try to do.

I am some kind of a software engineer. Besides that, I am also a father, a husband, a runner, and a violist.

I am mostly curious, open, and keen to try new things.

That's it for now.

Env vars in GitHub Actions

UPDATE: I made some false conclusions that I correct at the very bottom of this post.

I often use GitHub Actions to automate tests, builds, and deployments. Naturally, I often have to deal with configuration values that I don't want to hardcode.

While there is the option to provide inputs and outputs for workflows and actions, it is often cumbersome to pass these values around, especially if that needs to happen very often. So I had the idea to export some of my input variables as environment variables, so I don't have to re-define them everywhere I go.

In this post, I'll describe a GitHub Actions workflow together with a custom action that demonstrates how env variables are propagated and where we can use them. What I want to find out is:

  • whether and how environment variables are propagated from a workflow into an action
  • how to access environment variables inside the action
  • whether environment variables can be used as default values for action parameters

The workflow

First, I'll describe the sample workflow.

Inside the workflow, there are two environment variables set. Both will be used inside the action later.

I will also use the action twice: once without any parameters (to see whether the default value can be defined via an env variable) and once with the parameter explicitly set (it's a no-brainer that an explicit value WILL override any default, but just to be sure).

on:
  push:

name: Test Workflow
env:
  MY_ENV_1: "my env 1 is set"
  INPUT_DEFAULT_ENV: "input default env is set"

jobs:
  test_job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout...
        uses: actions/checkout@v4
      - name: run the test action without set parameters
        uses: ./.github/actions/test_action
      - name: run the test action with explicit parameters
        uses: ./.github/actions/test_action
        with:
          input_with_default_env: "overriding the default env"

The action

name: Env var inheritance test
author: Thomas Niebler
inputs:
  input_with_default_env:
    description: "an input variable that has an environment value as default value"
    required: false
    default: $INPUT_DEFAULT_ENV
runs:
  using: composite
  steps:
    - name: Look at the env
      run: |
        echo ${{ env.MY_ENV_1 }}
        echo $MY_ENV_1
        echo input default env: ${{ inputs.input_with_default_env }}
        echo input default env was: $INPUT_DEFAULT_ENV
      shell: bash

The results

step 1

my env 1 is set
my env 1 is set
input default env: input default env is set
input default env was: input default env is set

step 2

my env 1 is set
my env 1 is set
input default env: overriding the default env
input default env was: input default env is set

Conclusion

We can see that the env variables are propagated from the workflow to the action. They can naturally be accessed via ${{ env.VAR_NAME }} or $VAR_NAME.

However, more interesting for me is that I can set default values for action parameters using environment variables from outside the action. For example, I could set a default value for a parameter that is used in multiple actions or workflows, with a value that depends on the calling workflow.

UPDATE: Some false conclusions and their corrections

A few days later, I found out that I drew some false conclusions from the outputs given above.

Concretely, the default value for the input_with_default_env parameter is not resolved from the environment variable. Instead, the literal string $INPUT_DEFAULT_ENV is set as the default value; GitHub Actions itself never replaces it with the value of the environment variable. However, in my very simple experiment setup, this literal value ends up in a bash shell. In detail, the substitution works in two steps:

  1. GitHub Actions sees that it should run the text

echo input default env: ${{ inputs.input_with_default_env }}

in a bash shell. The important thing here is that GitHub Actions sees the bash command above as pure text. The only thing it cares about is replacing ${{ inputs.input_with_default_env }} with the default value defined in the action's header, which is the string $INPUT_DEFAULT_ENV. So the first step is to replace the placeholder with the default value, resulting in:

echo input default env: $INPUT_DEFAULT_ENV

  2. In the second step, the command with the substituted default value is executed in a bash shell. For GitHub Actions this is still just a string, but bash recognizes $INPUT_DEFAULT_ENV as an environment variable and replaces it with its value. This is why the output of the action is input default env: input default env is set.

So, in fact, it is not DIRECTLY possible to propagate environment variables into actions like this. Still, we can use env variables inside actions, not as parameter defaults, but via the ${{ env.VAR_NAME }} syntax.
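To make the two substitution stages concrete, here is a small Python sketch that mimics them. This is a toy model of the behaviour, not GitHub's actual implementation:

```python
import os

# Stage 1: GitHub Actions replaces the ${{ ... }} placeholder with the
# default value as a plain string -- no env var lookup happens here.
command = "echo input default env: ${{ inputs.input_with_default_env }}"
default_value = "$INPUT_DEFAULT_ENV"
after_github = command.replace("${{ inputs.input_with_default_env }}", default_value)

# Stage 2: only when bash runs the command does $INPUT_DEFAULT_ENV get
# expanded against the actual environment.
os.environ["INPUT_DEFAULT_ENV"] = "input default env is set"
after_bash = os.path.expandvars(after_github)

print(after_github)  # echo input default env: $INPUT_DEFAULT_ENV
print(after_bash)    # echo input default env: input default env is set
```

The two print statements correspond exactly to the two steps above: after stage 1 the command still contains the literal variable name; only stage 2 produces the value.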

Container Apps in Azure Cloud - Why and How

Introduction

Containerization has become a popular approach for deploying and managing applications in the cloud. Azure Cloud provides a robust platform for running containerized applications, offering numerous benefits such as scalability, portability, and ease of management.

In this blog post, we will explore the reasons why container apps are a great fit for Azure Cloud and discuss how to leverage Azure services to deploy and manage containerized applications effectively.

Why Container Apps in Azure Cloud?

  1. Scalability: Azure Cloud provides auto-scaling capabilities, allowing container apps to handle varying workloads efficiently. With features like Azure Kubernetes Service (AKS), you can easily scale your containerized applications based on demand.

  2. Portability: Containers offer a consistent runtime environment, making it easier to deploy applications across different environments. Azure Container Registry (ACR) enables you to store and manage container images, ensuring seamless deployment across Azure Cloud.

  3. Isolation and Security: Containers provide isolation between applications, enhancing security and reducing the risk of dependency conflicts. Azure Container Instances (ACI) and Azure Kubernetes Service (AKS) offer built-in security features, ensuring the safety of your containerized applications.

How to Deploy Container Apps in Azure Cloud

  1. Containerization: Start by containerizing your application using technologies like Docker. Docker allows you to package your application and its dependencies into a single container image.

  2. Azure Container Registry: Push your container image to Azure Container Registry (ACR). ACR provides a secure and private repository for storing container images.

  3. Azure Kubernetes Service: Use Azure Kubernetes Service (AKS) to deploy and manage your containerized applications at scale. AKS simplifies the management of containerized workloads, providing features like automatic scaling, load balancing, and self-healing.

  4. Azure Container Instances: For smaller workloads or quick deployments, you can use Azure Container Instances (ACI). ACI allows you to run containers without managing the underlying infrastructure, making it ideal for lightweight applications.

Conclusion

Container apps in Azure Cloud offer a powerful and flexible solution for deploying and managing applications. With Azure's comprehensive set of services, you can easily leverage the benefits of containerization and build scalable, portable, and secure applications.

Stay tuned for more in-depth articles on specific Azure services and best practices for container apps in the Azure Cloud.


Retrieval-Augmented Generation: A Deep Dive

Retrieval-Augmented Generation (RAG) is a powerful technique that combines the best of retrieval-based and generative methods for machine learning models. It's particularly useful in the field of Natural Language Processing (NLP), where it can be used to create more sophisticated and context-aware AI models.

What is Retrieval-Augmented Generation?

RAG is a method that leverages the strengths of both retrieval-based and generative models. It uses a retriever to fetch relevant documents from a large corpus and then uses a generator to create a response based on the retrieved documents.

How does RAG work?

RAG operates in two main steps: retrieval and generation.

Retrieval

In the retrieval step, the model receives an input (such as a question) and uses a retriever to fetch relevant documents from a large corpus. The retriever is typically a dense vector model, such as Dense Passage Retrieval (DPR), which represents both the input and the documents in the corpus as vectors in a high-dimensional space. The retriever then selects the documents that are closest to the input in this space.
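The retrieval step can be sketched in a few lines of Python. Here a simple bag-of-words vector stands in for a learned dense encoder like DPR, and the corpus and query are made up for illustration; a real system would use learned embeddings and an approximate nearest-neighbour index:

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a dense encoder such as DPR: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

corpus = [
    "Paris is the capital of France",
    "The Eiffel Tower is in Paris",
    "Berlin is the capital of Germany",
]

def retrieve(query, k=2):
    # Rank all documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

print(retrieve("capital of France", k=1))  # → ['Paris is the capital of France']
```

The essential shape is the same as in DPR: embed the query, embed the documents, and select the documents closest to the query in the embedding space.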

Generation

In the generation step, the model uses a generator to create a response based on the retrieved documents. The generator is typically a sequence-to-sequence model, such as BART or T5, which can generate a coherent and contextually appropriate response.

The key innovation of RAG is that it performs the retrieval and generation steps jointly. This means that the model can adjust its retrieval based on the generation, and vice versa. This allows the model to create more accurate and contextually appropriate responses.

Why is RAG important?

RAG is important because it combines the strengths of retrieval-based and generative models. Retrieval-based models are good at fetching relevant information from a large corpus, but they can struggle to generate coherent and contextually appropriate responses. Generative models, on the other hand, are good at generating responses, but they can struggle to incorporate relevant information from a large corpus.

By combining these two approaches, RAG can create models that are both contextually aware and capable of generating coherent responses. This makes RAG a powerful tool for tasks such as question answering, dialogue systems, and other NLP applications.

Conclusion

Retrieval-Augmented Generation is a powerful technique that combines the strengths of retrieval-based and generative models. By performing retrieval and generation jointly, RAG can create more accurate and contextually appropriate responses. This makes it a valuable tool for a wide range of NLP applications.

Comparative Analysis of AWS API Gateway and FastAPI

When it comes to building and managing APIs, developers have a plethora of options to choose from. In this article, we will compare two popular choices: AWS API Gateway and FastAPI.

AWS API Gateway

AWS API Gateway is a fully managed service that makes it easy to create, deploy, and manage APIs at any scale. It provides features like authentication, authorization, caching, and monitoring out of the box. With AWS API Gateway, you can build RESTful APIs, WebSocket APIs, and HTTP APIs.

Some key features of AWS API Gateway include:

  • Easy integration with other AWS services like Lambda, DynamoDB, and S3.
  • Support for API versioning and stage management.
  • Fine-grained access control using IAM roles and policies.
  • Built-in request and response transformations.
  • Detailed monitoring and logging capabilities.

FastAPI

FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.7+ based on standard Python type hints. It is designed to be easy to use and highly efficient. FastAPI leverages the power of asynchronous programming to provide high-performance APIs.

Some key features of FastAPI include:

  • Automatic generation of interactive API documentation with Swagger UI and ReDoc.
  • Fast request and response serialization using Pydantic models.
  • Support for asynchronous request handlers using async/await syntax.
  • Built-in support for OAuth2 authentication and JWT tokens.
  • Integration with popular databases like SQLAlchemy and Tortoise ORM.

Comparative Analysis

Now let's compare AWS API Gateway and FastAPI based on various factors:

  1. Ease of Use: AWS API Gateway provides a user-friendly interface and seamless integration with other AWS services. FastAPI, on the other hand, offers a simple and intuitive API design with automatic documentation generation.

  2. Performance: FastAPI is known for its high-performance capabilities due to its asynchronous nature. AWS API Gateway also offers good performance, but it may introduce some latency due to its managed nature.

  3. Scalability: AWS API Gateway is a fully managed service that can scale automatically based on the incoming traffic. FastAPI can also scale horizontally by deploying multiple instances behind a load balancer.

  4. Flexibility: FastAPI provides more flexibility in terms of customization and control over the API implementation. AWS API Gateway, being a managed service, has some limitations in terms of customization.

  5. Cost: AWS API Gateway pricing is based on the number of API calls, data transfer, and additional features used. FastAPI, being an open-source framework, has no licensing cost, though you still pay for the infrastructure you run it on.

In conclusion, both AWS API Gateway and FastAPI are powerful tools for building APIs, but they cater to different use cases. AWS API Gateway is a fully managed service suitable for large-scale deployments with seamless integration with other AWS services. FastAPI, on the other hand, is a lightweight and high-performance framework that provides flexibility and control over the API implementation.

When choosing between the two, consider factors such as ease of use, performance requirements, scalability needs, flexibility, and cost. Ultimately, the choice depends on your specific project requirements and preferences.

Incorporating Domain Knowledge into LLMs

Lately, I have been working with LLMs (who has not?). Obviously, LLMs are quite useful for a variety of tasks, especially those concerned with communicating with humans. One quickly arrives at a point where highly specific domain knowledge has to be incorporated into the communication process, since this knowledge is usually missing from the general-purpose text corpora that LLMs are trained on. In this article, we'll explore some methods to bridge that gap and make LLMs more knowledgeable in specific domains.

1. Pre-training with Domain-Specific Data

One way to incorporate domain knowledge into LLMs is by pre-training them with domain-specific data. By exposing the model to a large corpus of text from the target domain, it can learn the specific vocabulary, grammar, and nuances of that domain. This helps the model generate more accurate and contextually relevant text in that domain.

However, this requires a large corpus of domain-specific text, which is rarely available in sufficient size.

2. Fine-tuning on Domain-Specific Tasks

Another approach is to fine-tune the pre-trained LLM on domain-specific tasks. By training the model on specific tasks related to the target domain, such as sentiment analysis or named entity recognition, it can learn to understand and generate text that aligns with the requirements of those tasks. This fine-tuning process helps the model acquire domain-specific knowledge and improve its performance in that domain.

Still, this doesn't help much with incorporating actual knowledge into the LLM.

3. Incorporating External Knowledge Sources

LLMs can also benefit from incorporating external knowledge sources. This can be done by integrating domain-specific knowledge bases, ontologies, or even expert-curated datasets into the model. By leveraging this external knowledge, the model can generate more accurate and informed text that aligns with the domain's concepts, facts, and context.

While this is a very promising approach, we still lack large domain-specific knowledge bases, as specialized knowledge mostly lives in people's heads (and, let's be honest, also guts) instead of a formalized knowledge base.

4. Human-in-the-Loop Approach

In some cases, incorporating domain knowledge into LLMs may require a human-in-the-loop approach. This involves having domain experts review and provide feedback on the generated text. By iteratively refining the model based on human feedback, the LLM can gradually improve its understanding and generation capabilities in the specific domain.

5. Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is another approach to incorporate domain knowledge into Large Language Models (LLMs). RAG combines the benefits of pre-trained transformers and information retrieval systems.

In the RAG approach, when a query is given, the model retrieves relevant documents from a knowledge source and then uses this information to generate a response. This allows the model to pull in external, domain-specific knowledge when generating text. The advantage of RAG is that it can leverage vast amounts of information without needing to have all of it in its training data. This makes it particularly useful for tasks where the required knowledge may not be present in the pre-training corpus.

However, the effectiveness of RAG depends on the quality and relevance of the retrieved documents. Therefore, it's crucial to have a well-structured and comprehensive knowledge source for the retrieval process.
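The retrieve-then-generate flow can be sketched as a simple pipeline. The retriever here is a trivial word-overlap ranker standing in for a vector search, and the knowledge base entries are invented examples; in practice the prompt would be sent to an actual LLM:

```python
def retrieve(query, knowledge_base, k=2):
    # Toy retriever: rank documents by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    # Stuff the retrieved documents into the prompt as grounding context.
    context = "\n".join(f"- {doc}" for doc in documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "The hydraulic pump model X-200 requires oil type HLP 46.",
    "Maintenance interval for the X-200 is 2000 operating hours.",
    "The office cafeteria opens at 11:30.",
]

query = "Which oil does the X-200 need?"
prompt = build_prompt(query, retrieve(query, kb))
print(prompt)
```

The point of the sketch: the domain fact ("HLP 46") never has to be in the model's training data; it enters the generation step through the prompt, which is exactly why retrieval quality matters so much.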

Conclusion

In conclusion, incorporating domain knowledge into LLMs is crucial for making them more effective and reliable in specific domains. Whether it's through pre-training, fine-tuning, leveraging external knowledge sources, or involving human experts, these methods help LLMs become more knowledgeable and contextually aware. By bridging the gap between general language understanding and domain-specific expertise, we can unlock the full potential of LLMs in various applications.

Rebuilding my page

The time has come to revive my page and this blog. Why not start with rebuilding the whole thing?

Why?

Because I felt that the custom-built solution I had before was too hacky to maintain. Also, I discovered mkdocs, mkdocs-material, and its nice blog feature (see https://squidfunk.github.io/mkdocs-material/).

Since I wasn't that much interested in running a Python-based service anymore and even preferred some kind of static page, using mkdocs was the obvious solution. Besides, I wanted to tinker around a bit.

What?

mkdocs is actually a tool to - you guessed it - create some kind of HTML documentation from a large pile of Markdown files. Using some plugins, you can extend its basic webpage design and behaviour. For example, mkdocs-material not only provides a nice look and feel to the blog, it also has an out-of-the-box blog plugin. You can see the result here.

Where?

I also moved the whole thing to GitHub (yes, from GitLab - I'm just not using GitLab so much anymore), so the CI pipeline will run on GitHub actions. I didn't feel like using GitHub pages yet.

Jinja in LaTeX

When updating your CV, you usually want to keep your information consistent across several systems. For example, I want to convey the same information on

  • my webpage
  • my LinkedIn/Xing profile
  • my CV as a PDF

However, when updating this info in one of these systems, I also need to update all the others manually. But I didn't become a computer guy to do things manually. I have a LaTeX CV (as should everyone) and I know a bit of Python. So one obvious option for me was to implement the macro bits in LaTeX and then fill them out using Python. The necessary information can then be gathered from any kind of data source that Python has access to (read: database, API, text files, you name it).

So, let's start with some juicy bits.

I recently wrote my own CV document class so that my main document basically looks like this (Note to myself: I should publish that somewhere):

\documentclass[a4paper,10pt]{cv-superior}
\usepackage[utf8]{inputenc}

\begin{document}
\defcustomlength{\headerheight}{3.6cm}
\defcustomlength{\contentheight}{\dimexpr\paperheight - \headerheight\relax}
\defcustomlength{\leftblockwidth}{.3\paperwidth}
\defcustomlength{\rightblockwidth}{\dimexpr\paperwidth - \leftblockwidth\relax}
\defcustomlength{\entrysep}{.5cm}

\begin{tikzpicture}[remember picture,overlay]

\header{THOMAS}{NIEBLER}{Software Architect}
\section{PROFILE}
\begin{entrylist}
    \plaintextentry{...}
\end{entrylist}

\section{EXPERIENCE}
\begin{entrylist}
    \listentryemployer{Bosch Rexroth AG}{Apr' 19 -- now}

    \listentry{Software Architect}
    {Bosch Rexroth AG}
    {Lohr am Main, Germany}
    {Feb '21 -- now}
    {
        \begin{itemize}
            \item ...
        \end{itemize}
    }

    % some more list entries, eventually with different employers
\end{entrylist}

\section{COMPETENCIES}
\begin{entrylist}
    \simplelistentry{Technical}{
        \begin{itemize}
            \item
        \end{itemize}
    }

    % some more entries like this
\end{entrylist}

% Everything that is a smallsection needs a smallentrylist and appears on the left hand side.
\smallsection{EDUCATION}
\begin{smallentrylist}
    \smalllistentry{PhD, Computer Science}
    {University of Würzburg}
    {Mar '19}

    % ...
\end{smallentrylist}

\smallsection{TECHNOLOGIES}
\begin{smallentrylist}
    \skills{
    LaTeX tinkering/over 9000,
    other stuff/5
    }
\end{smallentrylist}
\end{tikzpicture}
\end{document}

This is for now pure LaTeX. However, with those macros, the whole document generation can easily be done through Jinja templating.

First off, what is Jinja templating?

Using Jinja templating, we can insert some kind of placeholders ("templates") into a document. This is e.g. done in web documents (Jinja is for example used within the Flask framework), but according to the Jinja webpage, it can be used with any kind of text file (yes, it also says LaTeX).

Rendering Jinja templates is a breeze using the Jinja API when using simple HTML pages (example shamelessly copied from the API page):

from jinja2 import Environment, PackageLoader, select_autoescape
env = Environment(
    loader=PackageLoader("yourapp"),
    autoescape=select_autoescape()
)
my_variable_dict = {
    "my": "variables",
    "are": "defined",
    "right": "here",
    "this_should": "be filled",
    "with some": "python magic"
}

template = env.get_template("mytemplate.html")

output = template.render(**my_variable_dict)

An exemplary yourapp/templates/mytemplate.html could look like this:

<html>
<body>
My {{my}} are {{are}} {% if right == "here" %}
right here
{% else %}
somewhere else
{% endif %} ...
</body>
</html>

And the final result residing in output would look like this:

<html>
<body>
My variables are defined right here ...
</body>
</html>

Now, rendering a Jinja template in LaTeX is a little more tricky, as we cannot easily use curly braces and percentage signs as our template delimiters, since those characters are also part of LaTeX's syntax. Just imagine a block opening with a comment directly following it, e.g.:

\textbf{%
some bold text
}

This looks like a block opening to Jinja, causing quite some confusion (and obviously: no rendered documents).

Luckily, we just have to adjust the Environment instantiation a bit, for example:

env = Environment(
    loader=PackageLoader("yourapp"),
    block_start_string='<BLOCK>',
    block_end_string='</BLOCK>',
    variable_start_string='<VAR>',
    variable_end_string='</VAR>',
)


With this, we are now ready to render the following LaTeX template:

\documentclass[a4paper,10pt]{article}

\begin{document}

<BLOCK>for x in range(5)</BLOCK>
  \textbf{<VAR>x</VAR>}\\
<BLOCK>endfor</BLOCK>

\end{document}

resulting in the syntactically perfectly valid LaTeX document:

\documentclass[a4paper,10pt]{article}

\begin{document}
  \textbf{0}\\
  \textbf{1}\\
  \textbf{2}\\
  \textbf{3}\\
  \textbf{4}\\

\end{document}
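The transformation above can be reproduced with a short script. I use `from_string` here instead of a file-based loader to keep the example self-contained:

```python
from jinja2 import Environment

# Swap Jinja's default {% ... %} / {{ ... }} delimiters for LaTeX-safe ones.
env = Environment(
    block_start_string='<BLOCK>',
    block_end_string='</BLOCK>',
    variable_start_string='<VAR>',
    variable_end_string='</VAR>',
)

latex_template = r"""\documentclass[a4paper,10pt]{article}

\begin{document}

<BLOCK>for x in range(5)</BLOCK>
  \textbf{<VAR>x</VAR>}\\
<BLOCK>endfor</BLOCK>

\end{document}"""

# Render the template; the loop expands into five \textbf lines.
print(env.from_string(latex_template).render())
```

For the real CV, you would load the template from a file via a loader and pass the entry data into `render()` as keyword arguments.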

The used example code can be found in this GitHub repository.

Going back to my CV, my final document would now look like this:

\documentclass[a4paper,10pt]{cv-superior}
\usepackage[utf8]{inputenc}

\begin{document}
\defcustomlength{\headerheight}{3.6cm}
\defcustomlength{\contentheight}{\dimexpr\paperheight - \headerheight\relax}
\defcustomlength{\leftblockwidth}{.3\paperwidth}
\defcustomlength{\rightblockwidth}{\dimexpr\paperwidth - \leftblockwidth\relax}
\defcustomlength{\entrysep}{.5cm}

\begin{tikzpicture}[remember picture,overlay]

\header{THOMAS}{NIEBLER}{Software Architect}
\section{PROFILE}
\begin{entrylist}
    \plaintextentry{<VAR>profiletext</VAR>}
\end{entrylist}

\section{EXPERIENCE}
\begin{entrylist}
<BLOCK>for entry in experience_list</BLOCK>
    <BLOCK>if entry.employer_with_several_roles_first_role</BLOCK>
    \listentryemployer{<VAR>entry.employer</VAR>}{<VAR>entry.employer_start</VAR> -- <VAR>entry.employer_end</VAR>}
    <BLOCK>endif</BLOCK>
    \listentry{<VAR>entry.role</VAR>}
    {<VAR>entry.employer</VAR>}
    {<VAR>entry.location</VAR>}
    {<VAR>entry.role_start</VAR> -- <VAR>entry.end</VAR>}
    {
        <VAR>entry.content</VAR>
    }
<BLOCK>endfor</BLOCK>
\end{entrylist}

% and so on...

\end{tikzpicture}
\end{document}

So I'd simply loop over all my entries received from some data origin, and that's it. To be honest, the final solution is not as practical as I had hoped, as I ran into other problems afterwards, especially with compiling the code automatically. In the end, Overleaf was the more convenient alternative.

Some lessons learned as a Software Architect

I started out in the role of a Software Architect roughly one year ago. I found that my previous image of architecture work differs quite a bit from what was actually expected of me.

Correction #1: Shifting around blocks is a task that comes only very late

I originally thought that the task of an architect is to devise the architecture of a software system, i.e. shifting around different blocks of functionality. While this is not entirely wrong, it comprises only a small part of what actually needs to be done. The task of a software architect is to make sure that the intended software product fulfills its purpose and can actually be developed.

Correction #2: Titles are (actually) meaningless

Yes, I hold a PhD. It does not matter a bit here. People love to assign themselves fancy titles. Do not pay attention to these self-assigned titles; instead, pay close attention to what people say and do. If the two overlap, great. If there's an obvious discrepancy, try to stay away.

Correction #3: Talk non-nerdy to me

Software Architects must be great communicators, yes. But be aware of who you are talking to! While you were chosen (amongst the other great qualities you surely possess) because you are technically very proficient, most of the time you will communicate with people on the far end of the proficiency spectrum. Talk to them in very simple words, with maybe slightly inaccurate descriptions, as long as it helps them understand. Technical details are for developers.

Correction #4: Software is only one part of your architecture

As a software architect, you design the architecture of a piece of software. The decisions you take while designing it are, however, heavily influenced by the ecosystem the software will be used in. This ecosystem can be entirely non-soft, for example the hardware parts of a car's sensor and actuator infrastructure, or even something entirely unpredictable, like the human attempting to drive the car.

How this Blog Works internally

Just a few notes on how my blog works internally.

Flask Server

The blog backend is written in Flask, a rather minimalist web framework in Python. As there are no users here, I don't need any authorization. I do not have a database backend. Anything I need is available via Web APIs.

Markdown

I write my posts in Markdown. To parse them and render them as HTML, I use the markdown2 Python package, which supports a few extras compared to vanilla Markdown but is otherwise still rather basic. Still, the parser is very fast and reliable.

Some Custom Stuff

I have added a few functionalities, e.g.

  • LaTeX formulas are rendered with QuickLaTeX (see the corresponding blog post for this)
  • Bibliographic references are rendered using my BibSonomy collection of papers. BibSonomy offers quite a few export formats which makes it ideal as a centralized paper repository (see e.g. how I use it in LaTeX)

GitLab CI

Publishing new posts works like a charm thanks to GitLab's awesome CI functionalities. Every time I push to my blog repository, it automatically deploys everything to my webspace and restarts the web service. This way, I do not need any edit functionalities in my blog, can quickly experiment with new features on my development machine (without breaking production) and always have a backup and history. If that isn't awesome, nothing is.