
Hello, I'm Thomas

Thank you for coming here.

I want to write, so that's what I try to do.

I am some kind of software engineer. Besides that, I am also a father, a husband, a runner, and a violist.

I am mostly curious, open, and keen to try new things.

That's it for now.

Container Apps in Azure Cloud - Why and How

Introduction

Containerization has become a popular approach for deploying and managing applications in the cloud. Azure Cloud provides a robust platform for running containerized applications, offering numerous benefits such as scalability, portability, and ease of management.

In this blog post, we will explore the reasons why container apps are a great fit for Azure Cloud and discuss how to leverage Azure services to deploy and manage containerized applications effectively.

Why Container Apps in Azure Cloud?

  1. Scalability: Azure Cloud provides auto-scaling capabilities, allowing container apps to handle varying workloads efficiently. With features like Azure Kubernetes Service (AKS), you can easily scale your containerized applications based on demand.

  2. Portability: Containers offer a consistent runtime environment, making it easier to deploy applications across different environments. Azure Container Registry (ACR) enables you to store and manage container images, ensuring seamless deployment across Azure Cloud.

  3. Isolation and Security: Containers provide isolation between applications, enhancing security and reducing the risk of dependency conflicts. Azure Container Instances (ACI) and Azure Kubernetes Service (AKS) offer built-in security features, ensuring the safety of your containerized applications.

How to Deploy Container Apps in Azure Cloud

  1. Containerization: Start by containerizing your application using technologies like Docker. Docker allows you to package your application and its dependencies into a single container image.

  2. Azure Container Registry: Push your container image to Azure Container Registry (ACR). ACR provides a secure and private repository for storing container images.

  3. Azure Kubernetes Service: Use Azure Kubernetes Service (AKS) to deploy and manage your containerized applications at scale. AKS simplifies the management of containerized workloads, providing features like automatic scaling, load balancing, and self-healing.

  4. Azure Container Instances: For smaller workloads or quick deployments, you can use Azure Container Instances (ACI). ACI allows you to run containers without managing the underlying infrastructure, making it ideal for lightweight applications.
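
For steps 1 and 2, a minimal sketch using the Docker SDK for Python could look like the following. The registry name, credentials, and image tag are placeholders, not real values, and in practice you would typically use the Docker or az CLI instead:

import docker

REGISTRY = "myregistry.azurecr.io"   # placeholder ACR login server
IMAGE = f"{REGISTRY}/myapp:v1"       # placeholder image name and tag

client = docker.from_env()

# Step 1: build the image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag=IMAGE)

# Step 2: authenticate against the registry and push the image to ACR.
client.login(username="<acr-username>", password="<acr-password>", registry=REGISTRY)
for line in client.images.push(IMAGE, stream=True, decode=True):
    print(line)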

Conclusion

Container apps in Azure Cloud offer a powerful and flexible solution for deploying and managing applications. With Azure's comprehensive set of services, you can easily leverage the benefits of containerization and build scalable, portable, and secure applications.

Stay tuned for more in-depth articles on specific Azure services and best practices for container apps in the Azure Cloud.


Retrieval-Augmented Generation: A Deep Dive

Retrieval-Augmented Generation (RAG) is a powerful technique that combines the best of retrieval-based and generative methods for machine learning models. It's particularly useful in the field of Natural Language Processing (NLP), where it can be used to create more sophisticated and context-aware AI models.

What is Retrieval-Augmented Generation?

RAG is a method that leverages the strengths of both retrieval-based and generative models. It uses a retriever to fetch relevant documents from a large corpus and then uses a generator to create a response based on the retrieved documents.

How does RAG work?

RAG operates in two main steps: retrieval and generation.

Retrieval

In the retrieval step, the model receives an input (such as a question) and uses a retriever to fetch relevant documents from a large corpus. The retriever is typically a dense vector model, such as Dense Passage Retrieval (DPR), which represents both the input and the documents in the corpus as vectors in a high-dimensional space. The retriever then selects the documents that are closest to the input in this space.

Generation

In the generation step, the model uses a generator to create a response based on the retrieved documents. The generator is typically a sequence-to-sequence model, such as BART or T5, which can generate a coherent and contextually appropriate response.

The key innovation of RAG is that it performs the retrieval and generation steps jointly. This means that the model can adjust its retrieval based on the generation, and vice versa. This allows the model to create more accurate and contextually appropriate responses.
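
To make the two steps concrete, here is a minimal, dependency-light sketch of the retrieve-then-generate pattern. TF-IDF stands in for a dense retriever such as DPR, the corpus and question are toy examples, and the generator call is only indicated as a comment; a real pipeline would use a seq2seq model like BART or T5:

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Python is a popular programming language for machine learning.",
]

question = "Where is the Eiffel Tower?"

# Retrieval: embed corpus and question, pick the most similar documents.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform([question])
scores = (doc_vectors @ query_vector.T).toarray().ravel()
top_docs = [corpus[i] for i in scores.argsort()[::-1][:2]]

# Generation: condition the generator on the question plus the retrieved documents.
prompt = "Context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {question}\nAnswer:"
print(prompt)
# answer = seq2seq_model.generate(prompt)  # e.g. BART or T5 in a real pipeline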

Why is RAG important?

RAG is important because it combines the strengths of retrieval-based and generative models. Retrieval-based models are good at fetching relevant information from a large corpus, but they can struggle to generate coherent and contextually appropriate responses. Generative models, on the other hand, are good at generating responses, but they can struggle to incorporate relevant information from a large corpus.

By combining these two approaches, RAG can create models that are both contextually aware and capable of generating coherent responses. This makes RAG a powerful tool for tasks such as question answering, dialogue systems, and other NLP applications.

Conclusion

Retrieval-Augmented Generation is a powerful technique that combines the strengths of retrieval-based and generative models. By performing retrieval and generation jointly, RAG can create more accurate and contextually appropriate responses. This makes it a valuable tool for a wide range of NLP applications.

Comparative Analysis of AWS API Gateway and FastAPI

When it comes to building and managing APIs, developers have a plethora of options to choose from. In this article, we will compare two popular choices: AWS API Gateway and FastAPI.

AWS API Gateway

AWS API Gateway is a fully managed service that makes it easy to create, deploy, and manage APIs at any scale. It provides features like authentication, authorization, caching, and monitoring out of the box. With AWS API Gateway, you can build RESTful APIs, WebSocket APIs, and HTTP APIs.

Some key features of AWS API Gateway include:

  • Easy integration with other AWS services like Lambda, DynamoDB, and S3.
  • Support for API versioning and stage management.
  • Fine-grained access control using IAM roles and policies.
  • Built-in request and response transformations.
  • Detailed monitoring and logging capabilities.
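
As a rough illustration of how such an API is wired together programmatically, the following boto3 sketch creates a minimal REST API with a single mock-integrated route. It assumes AWS credentials and a region are already configured, and the MOCK integration is purely illustrative; a real API would integrate with Lambda or an HTTP backend:

import boto3

apigw = boto3.client("apigateway")

api = apigw.create_rest_api(name="demo-api")
api_id = api["id"]

# Find the root resource ("/") and attach a child resource /items to it.
root_id = next(
    r["id"] for r in apigw.get_resources(restApiId=api_id)["items"] if r["path"] == "/"
)
items = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart="items")

apigw.put_method(
    restApiId=api_id, resourceId=items["id"], httpMethod="GET", authorizationType="NONE"
)
apigw.put_integration(
    restApiId=api_id, resourceId=items["id"], httpMethod="GET", type="MOCK"
)

# Deploy the API to a stage so it becomes callable.
apigw.create_deployment(restApiId=api_id, stageName="dev")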

FastAPI

FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.7+ based on standard Python type hints. It is designed to be easy to use and highly efficient. FastAPI leverages the power of asynchronous programming to provide high-performance APIs.

Some key features of FastAPI include:

  • Automatic generation of interactive API documentation with Swagger UI and ReDoc.
  • Fast request and response serialization using Pydantic models.
  • Support for asynchronous request handlers using async/await syntax.
  • Built-in support for OAuth2 authentication and JWT tokens.
  • Integration with popular ORMs such as SQLAlchemy and Tortoise ORM.
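
For comparison, a minimal FastAPI application looks like this (the Item model and routes are purely illustrative):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/items/{item_id}")
async def read_item(item_id: int):
    # Path parameters are validated and converted based on the type hints.
    return {"item_id": item_id}

@app.post("/items/")
async def create_item(item: Item):
    # The request body is parsed and validated into the Pydantic model.
    return item

Assuming the file is called main.py, you can run it with uvicorn main:app --reload and find the interactive documentation under /docs.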

Comparative Analysis

Now let's compare AWS API Gateway and FastAPI based on various factors:

  1. Ease of Use: AWS API Gateway provides a user-friendly interface and seamless integration with other AWS services. FastAPI, on the other hand, offers a simple and intuitive API design with automatic documentation generation.

  2. Performance: FastAPI is known for its high-performance capabilities due to its asynchronous nature. AWS API Gateway also offers good performance, but it may introduce some latency due to its managed nature.

  3. Scalability: AWS API Gateway is a fully managed service that can scale automatically based on the incoming traffic. FastAPI can also scale horizontally by deploying multiple instances behind a load balancer.

  4. Flexibility: FastAPI provides more flexibility in terms of customization and control over the API implementation. AWS API Gateway, being a managed service, has some limitations in terms of customization.

  5. Cost: AWS API Gateway pricing is based on the number of API calls, data transfer, and additional features used. FastAPI, being an open-source framework, has no licensing cost, though you still pay for the infrastructure you host it on.

In conclusion, both AWS API Gateway and FastAPI are powerful tools for building APIs, but they cater to different use cases. AWS API Gateway is a fully managed service suitable for large-scale deployments with seamless integration with other AWS services. FastAPI, on the other hand, is a lightweight and high-performance framework that provides flexibility and control over the API implementation.

When choosing between the two, consider factors such as ease of use, performance requirements, scalability needs, flexibility, and cost. Ultimately, the choice depends on your specific project requirements and preferences.

Incorporating Domain Knowledge into LLMs

Lately, I have been working with LLMs (who has not?). Obviously, LLMs are quite nice for a variety of tasks, especially those concerned with communicating with humans. One quickly arrives at a point where highly specific domain knowledge has to be incorporated into the communication process, since this knowledge is usually missing from the general text corpora that LLMs are trained on. In this article, we'll explore some methods to bridge that gap and make LLMs more knowledgeable in specific domains.

1. Pre-training with Domain-Specific Data

One way to incorporate domain knowledge into LLMs is by pre-training them with domain-specific data. By exposing the model to a large corpus of text from the target domain, it can learn the specific vocabulary, grammar, and nuances of that domain. This helps the model generate more accurate and contextually relevant text in that domain.

However, this requires a large corpus of domain-specific text, which is rarely available at a sufficient size.

2. Fine-tuning on Domain-Specific Tasks

Another approach is to fine-tune the pre-trained LLM on domain-specific tasks. By training the model on specific tasks related to the target domain, such as sentiment analysis or named entity recognition, it can learn to understand and generate text that aligns with the requirements of those tasks. This fine-tuning process helps the model acquire domain-specific knowledge and improve its performance in that domain.
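
As a rough sketch of what such fine-tuning can look like with Hugging Face transformers, consider the following; the tiny in-memory dataset, the label scheme, and the choice of bert-base-uncased are purely illustrative:

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy domain-specific classification data (placeholder sentences and labels).
data = Dataset.from_dict({
    "text": [
        "The hydraulic pump shows excessive wear.",
        "The control unit passed all acceptance tests.",
    ],
    "labels": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1),
    train_dataset=tokenized,
)
trainer.train()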

Still, this doesn't help much with incorporating actual knowledge into the LLM.

3. Incorporating External Knowledge Sources

LLMs can also benefit from incorporating external knowledge sources. This can be done by integrating domain-specific knowledge bases, ontologies, or even expert-curated datasets into the model. By leveraging this external knowledge, the model can generate more accurate and informed text that aligns with the domain's concepts, facts, and context.

While this is a very promising approach, we still lack large domain-specific knowledge bases, as specialized knowledge mostly lives in people's heads (and, let's be honest, also their guts) instead of in a formalized knowledge base.

4. Human-in-the-Loop Approach

In some cases, incorporating domain knowledge into LLMs may require a human-in-the-loop approach. This involves having domain experts review and provide feedback on the generated text. By iteratively refining the model based on human feedback, the LLM can gradually improve its understanding and generation capabilities in the specific domain.

5. Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is another approach to incorporating domain knowledge into Large Language Models (LLMs). RAG combines the benefits of pre-trained transformers and information retrieval systems.

In the RAG approach, when a query is given, the model retrieves relevant documents from a knowledge source and then uses this information to generate a response. This allows the model to pull in external, domain-specific knowledge when generating text. The advantage of RAG is that it can leverage vast amounts of information without needing to have all of it in its training data. This makes it particularly useful for tasks where the required knowledge may not be present in the pre-training corpus.

However, the effectiveness of RAG depends on the quality and relevance of the retrieved documents. Therefore, it's crucial to have a well-structured and comprehensive knowledge source for the retrieval process.

Conclusion

In conclusion, incorporating domain knowledge into LLMs is crucial for making them more effective and reliable in specific domains. Whether it's through pre-training, fine-tuning, leveraging external knowledge sources, or involving human experts, these methods help LLMs become more knowledgeable and contextually aware. By bridging the gap between general language understanding and domain-specific expertise, we can unlock the full potential of LLMs in various applications.

Rebuilding my page

The time has come to revive my page and this blog. Why not start with rebuilding the whole thing?

Why?

Because I felt that the custom-built solution I had before was too hacky to maintain. Also, I discovered mkdocs, mkdocs-material, and its nice blog feature (see https://squidfunk.github.io/mkdocs-material/).

Since I wasn't that much interested in running a Python-based service anymore and even preferred some kind of static page, using mkdocs was the obvious solution. Besides, I wanted to tinker around a bit.

What?

mkdocs is actually a tool to - you guessed it - create some kind of HTML documentation from a large pile of Markdown files. Using some plugins, you can extend its basic webpage design and behaviour. For example, mkdocs-material not only provides a nice look and feel to the blog, it also has an out-of-the-box blog plugin. You can see the result here.

Where?

I also moved the whole thing to GitHub (yes, from GitLab - I'm just not using GitLab so much anymore), so the CI pipeline will run on GitHub actions. I didn't feel like using GitHub pages yet.

Jinja in LaTeX

When updating your CV, you usually want to keep your information consistent across several systems. For example, I want to convey the same information on

  • my webpage
  • my LinkedIn/Xing profile
  • my CV as a PDF

However, when I update this info in one of these systems, I also need to update all the other systems manually. But I did not become a computer guy to do things manually. I have a LaTeX CV (as should everyone) and I know a bit about Python. So one obvious option for me was to implement the macro bits in LaTeX and then fill them out using Python. The necessary information can then be gathered from any kind of data source that Python has access to (read: database, API, text files, you name it).

So, let's start with some juicy bits.

I recently wrote my own CV document class so that my main document basically looks like this (Note to myself: I should publish that somewhere):

\documentclass[a4paper,10pt]{cv-superior}
\usepackage[utf8]{inputenc}

\begin{document}
\defcustomlength{\headerheight}{3.6cm}
\defcustomlength{\contentheight}{\dimexpr\paperheight - \headerheight\relax}
\defcustomlength{\leftblockwidth}{.3\paperwidth}
\defcustomlength{\rightblockwidth}{\dimexpr\paperwidth - \leftblockwidth\relax}
\defcustomlength{\entrysep}{.5cm}

\begin{tikzpicture}[remember picture,overlay]

\header{THOMAS}{NIEBLER}{Software Architect}
\section{PROFILE}
\begin{entrylist}
    \plaintextentry{...}
\end{entrylist}

\section{EXPERIENCE}
\begin{entrylist}
    \listentryemployer{Bosch Rexroth AG}{Apr '19 -- now}

    \listentry{Software Architect}
    {Bosch Rexroth AG}
    {Lohr am Main, Germany}
    {Feb '21 -- now}
    {
        \begin{itemize}
            \item ...
        \end{itemize}
    }

    % some more list entries, eventually with different employers
\end{entrylist}

\section{COMPETENCIES}
\begin{entrylist}
    \simplelistentry{Technical}{
        \begin{itemize}
            \item
        \end{itemize}
    }

    % some more entries like this
\end{entrylist}

% Everything that is a smallsection needs a smallentrylist and appears on the left hand side.
\smallsection{EDUCATION}
\begin{smallentrylist}
    \smalllistentry{PhD, Computer Science}
    {University of Würzburg}
    {Mar '19}

    % ...
\end{smallentrylist}

\smallsection{TECHNOLOGIES}
\begin{smallentrylist}
    \skills{
    LaTeX tinkering/over 9000,
    other stuff/5
    }
\end{smallentrylist}
\end{tikzpicture}
\end{document}

This is for now pure LaTeX. However, with those macros, the whole document generation can easily be done through Jinja templating.

First off, what is Jinja templating?

Using Jinja templating, we can insert some kind of placeholders ("templates") into a document. This is e.g. done in web documents (Jinja is for example used within the Flask framework), but according to the Jinja webpage, it can be used with any kind of text file (yes, it also says LaTeX).

Rendering Jinja templates is a breeze using the Jinja API, at least for simple HTML pages (example shamelessly copied from the API page):

from jinja2 import Environment, PackageLoader, select_autoescape
env = Environment(
    loader=PackageLoader("yourapp"),
    autoescape=select_autoescape()
)
my_variable_dict = {
    "my": "variables",
    "are": "defined",
    "right": "here",
    "this_should": "be filled",
    "with some": "python magic"
}

template = env.get_template("mytemplate.html")

output = template.render(**my_variable_dict)

An exemplary yourapp/templates/mytemplate.html could look like this:

<html>
<body>
My {{my}} are {{are}} {% if right == "here" %}
right here
{% else %}
somewhere else
{% endif %} ...
</body>
</html>

And the final result residing in output would look like this:

<html>
<body>
My variables are defined right here ...
</body>
</html>

Now, rendering a Jinja template in LaTeX is a little more tricky, as we cannot easily use curly braces and percentage signs as our template delimiters, since those characters are also part of LaTeX's syntax. Just imagine a block opening with a comment directly following it, e.g.:

\textbf{%
some bold text
}

This looks like a block opening to Jinja, causing quite some confusion (and obviously: no rendered documents).

Luckily, we just have to adjust the Environment instantiation a bit, for example:

env = Environment(
    loader=PackageLoader("yourapp"),
    block_start_string='<BLOCK>',
    block_end_string='</BLOCK>',
    variable_start_string='<VAR>',
    variable_end_string='</VAR>',
    # LaTeX macro definitions contain things like {#1}, which Jinja would
    # otherwise parse as a comment, so move the comment delimiters as well.
    comment_start_string='<COMMENT>',
    comment_end_string='</COMMENT>',
)


With this, we are now ready to render the following LaTeX template:

\documentclass[a4paper,10pt]{article}

\begin{document}

<BLOCK>for x in range(5)</BLOCK>
  \textbf{<VAR>x</VAR>}\\
<BLOCK>endfor</BLOCK>

\end{document}

resulting in the syntactically perfectly valid LaTeX document:

\documentclass[a4paper,10pt]{article}

\begin{document}
  \textbf{0}\\
  \textbf{1}\\
  \textbf{2}\\
  \textbf{3}\\
  \textbf{4}\\

\end{document}
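
For completeness, the render-and-write step for the snippet above might look like this; the template and output file names are placeholders, and I use a FileSystemLoader here instead of the PackageLoader from the earlier example:

from jinja2 import Environment, FileSystemLoader

env = Environment(
    loader=FileSystemLoader("templates"),
    block_start_string='<BLOCK>',
    block_end_string='</BLOCK>',
    variable_start_string='<VAR>',
    variable_end_string='</VAR>',
)

# Render the range(5) template from above and write a compilable .tex file.
template = env.get_template("snippet.tex")
with open("rendered.tex", "w", encoding="utf-8") as f:
    f.write(template.render())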

The used example code can be found in this GitHub repository.

Going back to my CV, my final document would now look like this:

\documentclass[a4paper,10pt]{cv-superior}
\usepackage[utf8]{inputenc}

\begin{document}
\defcustomlength{\headerheight}{3.6cm}
\defcustomlength{\contentheight}{\dimexpr\paperheight - \headerheight\relax}
\defcustomlength{\leftblockwidth}{.3\paperwidth}
\defcustomlength{\rightblockwidth}{\dimexpr\paperwidth - \leftblockwidth\relax}
\defcustomlength{\entrysep}{.5cm}

\begin{tikzpicture}[remember picture,overlay]

\header{THOMAS}{NIEBLER}{Software Architect}
\section{PROFILE}
\begin{entrylist}
    \plaintextentry{<VAR>profiletext</VAR>}
\end{entrylist}

\section{EXPERIENCE}
\begin{entrylist}
<BLOCK>for entry in experience_list</BLOCK>
    <BLOCK>if entry.employer_with_several_roles_first_role</BLOCK>
    \listentryemployer{<VAR>entry.employer</VAR>}{<VAR>entry.employer_start</VAR> -- <VAR>entry.employer_end</VAR>}
    <BLOCK>endif</BLOCK>
    \listentry{<VAR>entry.role</VAR>}
    {<VAR>entry.employer</VAR>}
    {<VAR>entry.location</VAR>}
    {<VAR>entry.role_start</VAR> -- <VAR>entry.role_end</VAR>}
    {
        <VAR>entry.content</VAR>
    }
<BLOCK>endfor</BLOCK>
\end{entrylist}

% and so on...

\end{tikzpicture}
\end{document}

So I'd simply loop over all my entries, which I had received from some kind of data origin, and that's it. To be honest, the final solution is not as practical as I thought it would be, since I ran into other problems afterwards, especially with compiling the document automatically. In the end, Overleaf was the far more convenient alternative.

Some lessons learned as a Software Architect

I started out in the role of a Software Architect roughly one year ago. I quickly found that my previous image of architecture work differs quite a bit from what was actually expected of me.

Correction #1: Shifting around blocks is a task that comes only very late

I originally thought that the task of an architect is to devise the architecture of a software system, i.e. shifting around different blocks of functionality. While this is not entirely wrong, it comprises only a small part of what should actually be done. The task of a software architect is to make sure that the intended software product fulfills its purpose and can actually be developed.

Correction #2: Titles are (actually) meaningless

Yes, I hold a PhD. Does not matter a bit here. People love to assign themselves fancy titles. Do not pay attention to these self-assigned titles. However, pay close attention to what they say and do. If it overlaps, great. If there's an obvious discrepancy, try to stay away.

Correction #3: Talk non-nerdy to me

Software Architects must be great communicators, yes. Be aware of who you are talking to! While you were chosen (amongst the other great qualities you surely possess) because you are technically very proficient, most of the time you will communicate with people on the far end of the proficiency spectrum. Talk to them with very simple words and maybe slightly inaccurate descriptions, as long as it helps them understand. Technical details are for developers.

Correction #4: Software is only a large part of your architecture

As a software architect, you design the architecture of a piece of software. The decisions you take while designing are, however, heavily influenced by the ecosystem the software will be used in. This ecosystem can be entirely non-soft, for example the hardware parts of a car's sensor and actuator infrastructure, or even something entirely unpredictable, like the human attempting to drive the car.

How this Blog Works internally

Just a few notes on how my blog works internally.

Flask Server

The blog backend is written in Flask, a rather minimalist web framework in Python. As there are no users here, I don't need any authorization. I also do not have a database backend; everything I need is available via web APIs.

Markdown

I write my posts in Markdown. To parse them and display them as HTML, I use the markdown2 Python package, which supports a few extras compared to vanilla Markdown but is otherwise still rather basic. Still, the parser is very fast and reliable.
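
To make the two sections above a bit more concrete, a stripped-down version of the Flask route plus markdown2 rendering could look like this; the file layout and route name are illustrative, not my actual setup:

from pathlib import Path

import markdown2
from flask import Flask, abort, render_template_string

app = Flask(__name__)
POSTS_DIR = Path("posts")  # Markdown sources live here (illustrative layout)

PAGE = "<html><body>{{ content | safe }}</body></html>"

@app.route("/post/<name>")
def show_post(name: str):
    post_file = POSTS_DIR / f"{name}.md"
    if not post_file.exists():
        abort(404)
    # markdown2 converts the Markdown source into an HTML fragment;
    # extras like "fenced-code-blocks" go beyond vanilla Markdown.
    html = markdown2.markdown(post_file.read_text(), extras=["fenced-code-blocks"])
    return render_template_string(PAGE, content=html)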

Some Custom Stuff

I have added a few functionalities, e.g.

  • LaTeX formulas are rendered with QuickLaTeX (see the corresponding blog post for this)
  • Bibliographic references are rendered using my BibSonomy collection of papers. BibSonomy offers quite a few export formats which makes it ideal as a centralized paper repository (see e.g. how I use it in LaTeX)

GitLab CI

Publishing new posts works like a charm thanks to GitLab's awesome CI functionalities. Every time I push to my blog repository, it automatically deploys everything to my webspace and restarts the web service. This way, I do not need any edit functionalities in my blog, can quickly experiment with new features on my development machine (without breaking production) and always have a backup and history. If that isn't awesome, nothing is.

Marrying BibSonomy and BibLaTeX

(This is a rewrite of an older blog post of mine that disappeared together with that awful WordPress page)

I manage my scientific references in BibSonomy. With a standard LaTeX+BibTeX setup, I would have to download my bibliography and save it into a file. When doing this again for every paper, I sooner or later end up with many versions of my BibTeX entries, since I introduce some abbreviations in order to squeeze the whole thing into some random page limit, or maybe I correct some things, etc. etc.

Managing my references in BibSonomy should however help me centralize them and stay consistent throughout all of my works. This is where BibLaTeX (and biber) comes into play. Using BibLaTeX, I can provide not only a file but also a web URL as a bibliography resource. Luckily, BibSonomy allows us to export a reference list as BibTeX using a simple web call. For example, if I want to export the list of publications that I tagged with myown (which can be found at https://www.bibsonomy.org/user/thoni/myown) as a BibTeX file, I simply add /bib directly after the base URL: https://www.bibsonomy.org/bib/user/thoni/myown.

Finally, I enter the following command into my LaTeX main file:

\addbibresource[location=remote]{https://www.bibsonomy.org/bib/user/thoni/myown}

and poof, I have access to all my standardized references that I worked so hard to assemble.
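
Put together, a minimal main file using biblatex with the biber backend and the remote BibSonomy resource could look like this; the citation key is just a placeholder:

\documentclass[a4paper,10pt]{article}
\usepackage[backend=biber]{biblatex}
\addbibresource[location=remote]{https://www.bibsonomy.org/bib/user/thoni/myown}

\begin{document}
As shown in \cite{someplaceholderkey2019}, ...
\printbibliography
\end{document}

Compile with pdflatex, run biber, then compile with pdflatex again to resolve the references.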

Have fun writing your paper, thesis, commentary... :)