<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[CloudVio]]></title><description><![CDATA[Community of cloud computing enthusiasts]]></description><link>https://blog.cloudvio.net</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1758888069792/4f08ebb6-e668-4920-8f52-3fd6af68bca3.jpeg</url><title>CloudVio</title><link>https://blog.cloudvio.net</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 02 May 2026 07:40:26 GMT</lastBuildDate><atom:link href="https://blog.cloudvio.net/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Engineering a Local Serverless Stack with OpenFaaS, and MongoDB]]></title><description><![CDATA[Following up on my previous article, I continue exploring how we can recreate, within on-premises environments, the kind of services and capabilities typically powered by public clouds.
The underlying question is: what if we found ourselves in a cont...]]></description><link>https://blog.cloudvio.net/engineering-a-local-serverless-stack-with-openfaas-and-mongodb</link><guid isPermaLink="true">https://blog.cloudvio.net/engineering-a-local-serverless-stack-with-openfaas-and-mongodb</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[MongoDB]]></category><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Sat, 01 Nov 2025 11:48:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761904691137/e88c11c1-b822-45dc-8004-09e0eb01aff3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Following up on my previous article, I continue exploring how we can <strong>recreate, within on-premises environments, the kind of services and capabilities typically powered by public clouds</strong>.</p>
<p>The underlying question is: <em>what if we found ourselves in a context where we had to deliver cloud-like services such as serverless architectures, event-driven systems, and so on, but under strict sovereignty or regulatory constraints that prevent the use of hyperscalers?</em></p>
<p>This is the mindset driving this article series.</p>
<p>In this article we’ll look at <strong>how to engineer a three-tier serverless architecture</strong>, built around <strong>functions triggered by HTTP requests</strong> that perform computations or operations, and interact with a <strong>NoSQL database for data storage and retrieval</strong>.</p>
<p>The goal is to <strong>replicate a typical cloud-native stack</strong>, for example an <strong>API Gateway + AWS Lambda + DynamoDB</strong> setup, but <strong>implemented entirely on-premises</strong>.</p>
<p>To bring this to life, the tools chosen to power the stack are:</p>
<ul>
<li><p><a target="_blank" href="https://www.openfaas.com/"><strong>OpenFaaS</strong></a>, serving as the equivalent of <strong>AWS Lambda</strong>,</p>
</li>
<li><p><strong>MongoDB</strong>, taking the role of <strong>DynamoDB</strong>, and</p>
</li>
<li><p>For the <strong>API Gateway</strong> layer, <strong>OpenFaaS</strong> already includes a built-in <strong>NATS-based</strong> gateway, responsible for routing and invoking functions, similar to the core behavior of AWS API Gateway. On top of that, <a target="_blank" href="https://traefik.io/traefik"><strong>Traefik</strong></a> is integrated to provide advanced capabilities such as <strong>CORS handling</strong>, <strong>host-based routing</strong>, and more flexible ingress management. Together, they effectively <strong>recreate the API Gateway’s behavior</strong> without introducing a separate service.</p>
</li>
</ul>
<p>To reproduce the setup described in this article, the <strong>main prerequisite</strong> is to have a <strong>Kubernetes cluster</strong> available.</p>
<p>As for the other requirements related to the tools and components of the stack, the necessary instructions will be provided progressively as we move through the article.</p>
<h2 id="heading-database-layer"><strong>Database Layer</strong></h2>
<p>To begin, we’ll start by deploying a <strong>MongoDB instance</strong>.</p>
<p>There are no strict requirements regarding the topology of your deployment, this will depend entirely on your specific needs. The key outcome at this stage is simply to have a <strong>working MongoDB instance</strong> that you can connect to and verify using a <strong>MongoDB client</strong> or <strong>mongosh</strong>. You should have a <strong>valid connection URI</strong> allowing you to perform the basic <strong>CRUD operations</strong> (create, read, update, delete).</p>
<p>You can use either the <strong>official MongoDB operator</strong> or the <strong>Percona operator</strong>; in my case, I chose the Percona one.</p>
<p>To deploy MongoDB using the <strong>Percona operator</strong>, you can refer to the setup guide <a target="_blank" href="https://docs.percona.com/percona-operator-for-mongodb/minikube.html">here</a>.</p>
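<p>If you prefer to script that verification instead of running it by hand in <strong>mongosh</strong>, here is a minimal sketch using the <strong>pymongo</strong> driver; the connection URI, database, and collection names are placeholders to adapt to your own deployment:</p>
<pre><code class="lang-python"># check_mongo.py -- quick connectivity and CRUD smoke test (placeholder values)
from pymongo import MongoClient

MONGO_URI = "mongodb://user:password@mongodb.example.local:27017/admin"  # replace with your URI

client = MongoClient(MONGO_URI, serverSelectionTimeoutMS=5000)  # fail fast if unreachable
db = client["smoke_test_db"]
col = db["smoke_test"]

inserted_id = col.insert_one({"status": "ok"}).inserted_id           # Create
print("read back:", col.find_one({"_id": inserted_id}))              # Read
col.update_one({"_id": inserted_id}, {"$set": {"status": "done"}})   # Update
col.delete_one({"_id": inserted_id})                                 # Delete
print("CRUD check completed")
</code></pre>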
<p>Once your MongoDB instance is <strong>up and running</strong>, and you’ve successfully tested connectivity through a valid URI with full CRUD capability, you can move on to the next step.</p>
<h2 id="heading-ingress-controller-managing-external-access-to-services"><strong>Ingress Controller — Managing External Access to Services</strong></h2>
<p>To manage external access to the services, and to configure certain aspects such as <a target="_blank" href="https://en.wikipedia.org/wiki/Cross-origin_resource_sharing"><strong>CORS</strong></a> <strong>handling</strong>, an <strong>Ingress Controller</strong> is required.</p>
<p>In my case, I chose <strong>Traefik</strong>. Beyond exposing services externally, Traefik also provides <strong>middleware-based CORS management</strong>, allowing fine-grained and declarative configuration.</p>
<p>This setup can also be achieved with <strong>NGINX</strong>, although the approach differs due to <strong>architectural differences</strong> in how each controller extends the Kubernetes Ingress API.</p>
<ul>
<li><p><strong>NGINX</strong> handles CORS through <strong>annotation-based configurations</strong>, injecting headers like Access-Control-Allow-Origin and processing OPTIONS preflight requests.</p>
</li>
<li><p><strong>Traefik</strong>, on the other hand, uses <strong>declarative Middleware Custom Resource Definitions (CRDs)</strong> to achieve the same functionality.</p>
</li>
</ul>
<p>To deploy Traefik as your Ingress Controller, you can follow the official setup guide <a target="_blank" href="https://doc.traefik.io/traefik/getting-started/install-traefik/">here</a>.</p>
<h2 id="heading-core-component-openfaas"><strong>Core Component — OpenFaaS</strong></h2>
<p>Now let’s move to the <strong>core piece</strong> of this entire stack: <strong>OpenFaaS</strong>.</p>
<p><strong>OpenFaaS®</strong> makes it simple for developers to <strong>deploy event-driven functions and microservices</strong> on Kubernetes <strong>without repetitive boilerplate code</strong>. You can package your code or even an existing binary into a <strong>Docker image</strong>, instantly turning it into a <strong>highly scalable endpoint</strong> with built-in <strong>auto-scaling</strong> and <strong>metrics</strong> support.</p>
<p>To set up OpenFaaS on Kubernetes, follow the installation guide provided <a target="_blank" href="https://docs.openfaas.com/deployment/kubernetes/">here</a>.</p>
<p>For easier management and interaction with your functions, it’s also recommended to install the <a target="_blank" href="https://docs.openfaas.com/tutorials/cli-with-node/#get-the-cli"><strong>OpenFaaS CLI</strong></a> on your workstation.</p>
<h3 id="heading-building-the-application-logic"><strong>Building the Application Logic</strong></h3>
<p>Now that the stack is ready, we can start creating the actual components.</p>
<p>Before diving into the OpenFaaS functions and configurations, let’s first look at the <strong>big picture</strong> of what we’re trying to build.</p>
<p>The goal is simple: a <strong>web page that displays the total number of visitors</strong>, updating each time the site is accessed.</p>
<p>Here’s the general flow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761847301176/d6ab185f-3678-4603-9eae-2ace34ff8433.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>When a visitor opens the web page, a small <strong>JavaScript script</strong> (running in the browser) sends an <strong>HTTP request</strong> to our function endpoint.</p>
</li>
<li><p>The function retrieves the current visitor count from the database, increments it by one, stores the new total, and returns the updated value.</p>
</li>
<li><p>The web page then displays this new count in real time.</p>
</li>
</ol>
<p>You’re free to design your web page as you wish; the frontend doesn’t matter much here. What’s important is the <strong>client-side script</strong> that interacts with the backend.</p>
<p>Below is what that script could look like:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> CONFIG = {
  <span class="hljs-attr">baseUrl</span>: <span class="hljs-string">"http://function.loacl"</span>, <span class="hljs-comment">// Replace with your Ingress host url</span>
  <span class="hljs-attr">endpoint</span>: <span class="hljs-string">"/function/visitor-counter"</span>, <span class="hljs-comment">// Single endpoint for both GET and POST</span>
  <span class="hljs-attr">maxRetries</span>: <span class="hljs-number">3</span>,
  <span class="hljs-attr">retryDelayMs</span>: <span class="hljs-number">1000</span>,
};

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">apiCall</span>(<span class="hljs-params">method = <span class="hljs-string">"GET"</span>, body = null</span>) </span>{
  <span class="hljs-keyword">const</span> url = <span class="hljs-string">`<span class="hljs-subst">${CONFIG.baseUrl}</span><span class="hljs-subst">${CONFIG.endpoint}</span>`</span>;
  <span class="hljs-keyword">const</span> options = {
    method,
    <span class="hljs-attr">headers</span>: { <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span> },
    <span class="hljs-attr">body</span>: body ? <span class="hljs-built_in">JSON</span>.stringify(body) : <span class="hljs-literal">null</span>,
  };

  <span class="hljs-keyword">let</span> attempts = <span class="hljs-number">0</span>;
  <span class="hljs-keyword">while</span> (attempts &lt; CONFIG.maxRetries) {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(url, options);
      <span class="hljs-keyword">if</span> (!response.ok) {
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">`HTTP error! Status: <span class="hljs-subst">${response.status}</span>`</span>);
      }
      <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> response.json();
    } <span class="hljs-keyword">catch</span> (error) {
      attempts++;
      <span class="hljs-keyword">if</span> (attempts &gt;= CONFIG.maxRetries) {
        <span class="hljs-built_in">console</span>.error(<span class="hljs-string">`Failed after <span class="hljs-subst">${attempts}</span> attempts: <span class="hljs-subst">${error.message}</span>`</span>);
        <span class="hljs-keyword">throw</span> error;
      }
      <span class="hljs-keyword">const</span> delay = CONFIG.retryDelayMs * <span class="hljs-built_in">Math</span>.pow(<span class="hljs-number">2</span>, attempts - <span class="hljs-number">1</span>); <span class="hljs-comment">// Exponential backoff</span>
      <span class="hljs-built_in">console</span>.warn(
        <span class="hljs-string">`Retry <span class="hljs-subst">${attempts}</span>/<span class="hljs-subst">${CONFIG.maxRetries}</span> after <span class="hljs-subst">${delay}</span>ms: <span class="hljs-subst">${error.message}</span>`</span>,
      );
      <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function">(<span class="hljs-params">resolve</span>) =&gt;</span> <span class="hljs-built_in">setTimeout</span>(resolve, delay));
    }
  }
}

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">updateAndDisplayVisitorCount</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> countElement = <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">"visitor-count"</span>);
  <span class="hljs-keyword">if</span> (!countElement) {
    <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Visitor count element not found"</span>);
    <span class="hljs-keyword">return</span>;
  }

  <span class="hljs-keyword">try</span> {
    <span class="hljs-comment">// Step 1: Increment count (POST to single endpoint)</span>
    <span class="hljs-keyword">await</span> apiCall(<span class="hljs-string">"POST"</span>);

    <span class="hljs-comment">// Step 2: Fetch updated count (GET to single endpoint)</span>
    <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> apiCall(<span class="hljs-string">"GET"</span>);
    <span class="hljs-keyword">const</span> count = data.count || <span class="hljs-number">0</span>;

    <span class="hljs-comment">// Step 3: Update DOM</span>
    countElement.textContent = count;
  } <span class="hljs-keyword">catch</span> (error) {
    countElement.textContent = <span class="hljs-string">"Error loading count"</span>;
    <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Visitor count failed:"</span>, error);
  }
}

<span class="hljs-comment">// Execute on DOM ready</span>
<span class="hljs-built_in">document</span>.addEventListener(<span class="hljs-string">"DOMContentLoaded"</span>, updateAndDisplayVisitorCount);
</code></pre>
<p><strong>Understanding the Script</strong></p>
<p>This JavaScript file is designed to be <strong>self-contained and easily portable</strong>.</p>
<p>You can include it in any HTML page, regardless of your frontend framework, as long as there’s an element with the ID <code>visitor-count</code> where the number will be displayed.</p>
<p>Here’s a breakdown of what it does:</p>
<ul>
<li><p><strong>Configuration (CONFIG)</strong></p>
<p>  Defines connection details such as the OpenFaaS gateway URL, the function endpoint, and retry behavior.</p>
</li>
<li><p><strong>apiCall() function</strong></p>
<p>  A reusable helper that performs HTTP calls (both GET and POST) to the backend function.</p>
<p>  It includes <strong>retry logic with exponential backoff</strong> to make the client more resilient to transient network errors.</p>
</li>
<li><p><strong>updateAndDisplayVisitorCount() function</strong></p>
<p>  Handles the two key operations:</p>
<ol>
<li><p>Sends a <strong>POST request</strong> to increment the visitor count in the backend.</p>
</li>
<li><p>Sends a <strong>GET request</strong> to retrieve the updated total and updates the page accordingly.</p>
<p> If anything fails, it gracefully displays an error message in the UI.</p>
</li>
</ol>
</li>
<li><p><strong>Execution on page load</strong></p>
<p>  The DOMContentLoaded listener ensures that the visitor count is updated as soon as the page finishes loading.</p>
</li>
</ul>
<p>In essence, this script acts as the <strong>frontend bridge</strong> to the backend, showing how a webpage can leverage a serverless architecture to provide real-time, data-driven behavior.</p>
<p>You can also follow along using my example by <strong>cloning my repository</strong> <a target="_blank" href="https://github.com/davWK/KNCC.git">here</a>, then navigating to the website directory, where you’ll find the source code for the web page.</p>
<p>In that directory, there’s also a simple <strong>Dockerfile</strong> that allows you to <strong>build the website image</strong> and <strong>deploy it on your Kubernetes cluster</strong>.</p>
<h2 id="heading-testing-the-frontend-deployment"><strong>Testing the Frontend Deployment</strong></h2>
<p>At this stage, you can already perform a <strong>first deployment</strong> to verify that everything works correctly on the <strong>frontend side</strong>.</p>
<p>Of course, we’re not concerned with the visitor count yet since the function hasn’t been deployed, but you can still validate the website <strong>deployment process</strong>, check that the <strong>services and ingress</strong> are correctly configured, and observe how the <strong>JavaScript behaves</strong> by opening the browser’s <strong>developer tools</strong> and looking at the <strong>console and network tabs</strong> to see how the script interacts with the (currently inactive) endpoint.</p>
<p>To deploy the website, you’ll need to create a <strong>Kubernetes Deployment</strong>, a <strong>Service</strong>, and an <strong>Ingress</strong> to expose it externally. If it doesn’t already exist, create a Kubernetes namespace for the frontend: <code>kubectl create ns &lt;namespace&gt;</code></p>
<p>Below is the example manifest used in this setup:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">website</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">website</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">website</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span> 
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">website</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">website</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">static-site:stable</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">website-service</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">website</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">website</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">website-http</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">website</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">traefik.ingress.kubernetes.io/router.entrypoints:</span> <span class="hljs-string">web</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">host-url</span>
      <span class="hljs-attr">http:</span>
        <span class="hljs-attr">paths:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
            <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
            <span class="hljs-attr">backend:</span>
              <span class="hljs-attr">service:</span>
                <span class="hljs-attr">name:</span> <span class="hljs-string">website-service</span>
                <span class="hljs-attr">port:</span>
                  <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
</code></pre>
<h3 id="heading-understanding-the-manifest"><strong>Understanding the Manifest</strong></h3>
<p>Let’s break this down piece by piece to make sure everything is clear and connected to the overall architecture.</p>
<p><strong>1. Deployment</strong></p>
<p>This section defines how the website application is deployed as pods within the cluster.</p>
<ul>
<li><p><code>replicas: 2</code> ensures <strong>two pods</strong> are running at all times for <strong>high availability</strong> and <strong>load distribution</strong>.</p>
</li>
<li><p>The containers block specifies the <strong>container image</strong> (static-site:stable) and exposes port 80.</p>
</li>
<li><p>The labels are crucial: they tie this deployment to the service definition that follows.</p>
</li>
</ul>
<p><strong>2. Service</strong></p>
<p>The service acts as a <strong>stable network abstraction</strong> over the pods.</p>
<ul>
<li><p>It selects the pods labeled <code>app: website</code> (set in the deployment).</p>
</li>
<li><p>It exposes them internally on port 80, mapping traffic to the same target port inside each pod.</p>
<p>  This makes it possible for other components such as the Ingress or other services to reach the website reliably, regardless of pod restarts or scaling events.</p>
</li>
</ul>
<p><strong>3. Ingress</strong></p>
<p>Finally, the Ingress defines <strong>external access</strong> to your site.</p>
<ul>
<li><p>The annotation <code>traefik.ingress.kubernetes.io/router.entrypoints: web</code> tells Traefik to use its HTTP entrypoint.</p>
</li>
<li><p>The rule maps the <strong>host</strong> url to the internal service (website-service).</p>
</li>
<li><p>Any request to / on that host will be routed to port 80 of the service, which then reaches one of your running pods.</p>
</li>
</ul>
<p>Once this is applied with <code>kubectl apply -f filename.yaml</code>, you can open the host URL in your browser to confirm that the frontend is served correctly.</p>
<p>Even though the visitor counter won’t show up yet, this step ensures the <strong>frontend deployment and routing</strong> are functioning as expected before we move on to the <strong>serverless function layer</strong>.</p>
<h2 id="heading-openfaas-function-creation">OpenFaaS Function creation</h2>
<p>Before jumping into the functions, let’s make things easier by exposing the OpenFaaS gateway through an ingress, just like we did for the website.</p>
<p>That way, you’ll have a clean, simple URL to access the OpenFaaS UI, and also a consistent base URL for your functions, which keeps everything tidy and predictable.</p>
<p>Here’s the YAML manifest for that:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">openfaas-gateway</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">openfaas</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">traefik.ingress.kubernetes.io/router.entrypoints:</span> <span class="hljs-string">web</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">host-url</span>
      <span class="hljs-attr">http:</span>
        <span class="hljs-attr">paths:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
            <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
            <span class="hljs-attr">backend:</span>
              <span class="hljs-attr">service:</span>
                <span class="hljs-attr">name:</span> <span class="hljs-string">gateway</span>
                <span class="hljs-attr">port:</span>
                  <span class="hljs-attr">number:</span> <span class="hljs-number">8080</span>
</code></pre>
<p><strong>Apply it</strong></p>
<pre><code class="lang-bash">kubectl apply -f openfaas-ingress.yaml
</code></pre>
<p>Once it’s applied, you can open:</p>
<ul>
<li><p><strong>UI:</strong> <code>host-url/ui</code></p>
</li>
<li><p><strong>Function calls:</strong> <code>host-url/function/&lt;function-name&gt;</code></p>
</li>
</ul>
<p>This gives you a single, consistent entrypoint for both the OpenFaaS UI and your deployed functions.</p>
<h3 id="heading-building-openfaas-function"><strong>Building OpenFaaS Function</strong></h3>
<p>Now that the gateway is exposed and ready to receive requests, it’s time to bring the backend logic to life: the actual <strong>function</strong> that will handle the visitor count.</p>
<p>First, create a dedicated folder to organize your OpenFaaS functions.</p>
<pre><code class="lang-bash">$ mkdir -p ~/<span class="hljs-built_in">functions</span> &amp;&amp; \
  <span class="hljs-built_in">cd</span> ~/<span class="hljs-built_in">functions</span>
</code></pre>
<p>The OpenFaaS engine provides built-in templates for various languages.</p>
<p>To fetch these templates, use the CLI:</p>
<pre><code class="lang-bash">$ faas-cli template pull
</code></pre>
<p>If the <code>python3-http</code> template is not among those pulled down, you can pull it directly from its repository:</p>
<pre><code class="lang-bash">$ faas-cli template pull https://github.com/openfaas/python-flask-template.git
</code></pre>
<p>Now let's scaffold a new Python function:</p>
<pre><code class="lang-bash">$ faas-cli new --lang python visitor-counter
</code></pre>
<p>After running the command, you’ll see these files:</p>
<pre><code class="lang-bash">visitor-counter/
├── handler.py
├── handler_test.py
├── requirements.txt
└── tox.ini
stack.yml
</code></pre>
<p>Now let’s open <code>handler.py</code> and replace its content with the following code:</p>
<pre><code class="lang-python"><span class="hljs-comment"># handler.py</span>

<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">from</span> pymongo <span class="hljs-keyword">import</span> MongoClient
<span class="hljs-keyword">from</span> pymongo.errors <span class="hljs-keyword">import</span> PyMongoError

<span class="hljs-comment"># Setup logging for observability (OpenFaaS captures stdout/stderr to Kubernetes logs)</span>
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

<span class="hljs-comment"># Decoupled config: Load from env vars (set in OpenFaaS YAML or secrets)</span>
<span class="hljs-keyword">try</span>:
    <span class="hljs-keyword">with</span> open(<span class="hljs-string">'/var/openfaas/secrets/mongo-uri'</span>, <span class="hljs-string">'r'</span>) <span class="hljs-keyword">as</span> f:
        MONGO_URI = f.read().strip()
<span class="hljs-keyword">except</span> FileNotFoundError:
    logger.error(<span class="hljs-string">"Secret file /var/openfaas/secrets/mongo-uri not found"</span>)
    MONGO_URI = <span class="hljs-literal">None</span>  <span class="hljs-comment"># Fallback; will raise in client</span>

DB_NAME = os.getenv(<span class="hljs-string">'DB_NAME'</span>, <span class="hljs-string">'visitor_db'</span>)
COLLECTION_NAME = os.getenv(<span class="hljs-string">'COLLECTION_NAME'</span>, <span class="hljs-string">'counters'</span>)
DOCUMENT_ID = os.getenv(<span class="hljs-string">'DOCUMENT_ID'</span>, <span class="hljs-string">'visitor_count'</span>)

<span class="hljs-comment"># Global client for reuse (connection pooling)</span>
client = <span class="hljs-literal">None</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_mongo_client</span>():</span>
    <span class="hljs-keyword">global</span> client
    <span class="hljs-keyword">if</span> client <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span>:
        <span class="hljs-keyword">try</span>:
            client = MongoClient(MONGO_URI)
            logger.info(<span class="hljs-string">"MongoDB connection established"</span>)
        <span class="hljs-keyword">except</span> PyMongoError <span class="hljs-keyword">as</span> e:
            logger.error(<span class="hljs-string">f"MongoDB connection failed: <span class="hljs-subst">{e}</span>"</span>)
            <span class="hljs-keyword">raise</span>
    <span class="hljs-keyword">return</span> client

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">handle</span>(<span class="hljs-params">event, context</span>):</span>
    <span class="hljs-keyword">try</span>:
        client = get_mongo_client()
        db = client[DB_NAME]
        collection = db[COLLECTION_NAME]

        <span class="hljs-keyword">if</span> event.method == <span class="hljs-string">'POST'</span>:
            <span class="hljs-comment"># Atomic increment with upsert (handles initial creation reliably)</span>
            result = collection.update_one(
                {<span class="hljs-string">"_id"</span>: DOCUMENT_ID},
                {<span class="hljs-string">"$inc"</span>: {<span class="hljs-string">"count"</span>: <span class="hljs-number">1</span>}},
                upsert=<span class="hljs-literal">True</span>
            )
            <span class="hljs-keyword">if</span> result.matched_count &gt; <span class="hljs-number">0</span> <span class="hljs-keyword">or</span> result.upserted_id:
                logger.info(<span class="hljs-string">"Visitor count incremented"</span>)
                <span class="hljs-keyword">return</span> {
                    <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>,
                    <span class="hljs-string">"body"</span>: {<span class="hljs-string">"message"</span>: <span class="hljs-string">"one visitor added"</span>}
                }
            <span class="hljs-keyword">else</span>:
                <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"Increment failed"</span>)

        <span class="hljs-keyword">elif</span> event.method == <span class="hljs-string">'GET'</span>:
            <span class="hljs-comment"># Fetch count (fallback to 0 if missing, though upsert prevents this)</span>
            doc = collection.find_one({<span class="hljs-string">"_id"</span>: DOCUMENT_ID})
            count = doc.get(<span class="hljs-string">"count"</span>, <span class="hljs-number">0</span>) <span class="hljs-keyword">if</span> doc <span class="hljs-keyword">else</span> <span class="hljs-number">0</span>
            logger.info(<span class="hljs-string">f"Fetched count: <span class="hljs-subst">{count}</span>"</span>)
            <span class="hljs-keyword">return</span> {
                <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>,
                <span class="hljs-string">"body"</span>: {<span class="hljs-string">"count"</span>: count}
            }

        <span class="hljs-keyword">else</span>:
            <span class="hljs-comment"># Reliable error for unsupported methods</span>
            <span class="hljs-keyword">return</span> {
                <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">405</span>,
                <span class="hljs-string">"body"</span>: {<span class="hljs-string">"error"</span>: <span class="hljs-string">"Method not allowed"</span>}
            }

    <span class="hljs-keyword">except</span> PyMongoError <span class="hljs-keyword">as</span> e:
        logger.error(<span class="hljs-string">f"Database error: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">500</span>,
            <span class="hljs-string">"body"</span>: {<span class="hljs-string">"error"</span>: <span class="hljs-string">"Internal server error"</span>}
        }
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        logger.error(<span class="hljs-string">f"Unexpected error: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">500</span>,
            <span class="hljs-string">"body"</span>: {<span class="hljs-string">"error"</span>: <span class="hljs-string">"Internal server error"</span>}
        }
</code></pre>
<p>Now, let’s <strong>deconstruct</strong> this function carefully.</p>
<p><strong>Imports</strong>:</p>
<pre><code class="lang-plaintext">import os
import logging
from pymongo import MongoClient
from pymongo.errors import PyMongoError
</code></pre>
<ul>
<li><p>os: Provides access to environment variables and file paths.</p>
</li>
<li><p>logging: Facilitates structured output for observability; OpenFaaS redirects stdout/stderr to Kubernetes Pod logs, aiding debugging via kubectl logs.</p>
</li>
<li><p>pymongo: The official MongoDB driver for Python; PyMongoError allows targeted exception handling for database-specific failures.</p>
</li>
</ul>
<p><strong>Logging Setup</strong>:</p>
<pre><code class="lang-plaintext">logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
</code></pre>
<p>This initializes a logger at INFO level, capturing key events like connections and operations. Errors or info messages can be queried in tools like kubectl or aggregated via ELK stacks.</p>
<p><strong>Configuration Loading</strong>:</p>
<pre><code class="lang-python"><span class="hljs-keyword">try</span>:
    <span class="hljs-keyword">with</span> open(<span class="hljs-string">'/var/openfaas/secrets/mongo-uri'</span>, <span class="hljs-string">'r'</span>) <span class="hljs-keyword">as</span> f:
        MONGO_URI = f.read().strip()
<span class="hljs-keyword">except</span> FileNotFoundError:
    logger.error(<span class="hljs-string">"Secret file /var/openfaas/secrets/mongo-uri not found"</span>)
    MONGO_URI = <span class="hljs-literal">None</span>  <span class="hljs-comment"># Fallback; will raise in client</span>

DB_NAME = os.getenv(<span class="hljs-string">'DB_NAME'</span>, <span class="hljs-string">'visitor_db'</span>)
COLLECTION_NAME = os.getenv(<span class="hljs-string">'COLLECTION_NAME'</span>, <span class="hljs-string">'counters'</span>)
DOCUMENT_ID = os.getenv(<span class="hljs-string">'DOCUMENT_ID'</span>, <span class="hljs-string">'visitor_count'</span>)
</code></pre>
<p>The MONGO_URI is read from a file-mounted secret. The try-except block handles missing secrets gracefully, logging the issue while allowing controlled failure. Other values use os.getenv with defaults, enabling overrides via OpenFaaS YAML without code changes.</p>
<p><strong>Connection Management</strong>:</p>
<pre><code class="lang-python">client = <span class="hljs-literal">None</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_mongo_client</span>():</span>
    <span class="hljs-keyword">global</span> client
    <span class="hljs-keyword">if</span> client <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span>:
        <span class="hljs-keyword">try</span>:
            client = MongoClient(MONGO_URI)
            logger.info(<span class="hljs-string">"MongoDB connection established"</span>)
        <span class="hljs-keyword">except</span> PyMongoError <span class="hljs-keyword">as</span> e:
            logger.error(<span class="hljs-string">f"MongoDB connection failed: <span class="hljs-subst">{e}</span>"</span>)
            <span class="hljs-keyword">raise</span>
    <span class="hljs-keyword">return</span> client
</code></pre>
<p>This implements lazy initialization with a global client for connection pooling. The MongoClient is thread-safe, supporting concurrent requests.</p>
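<p>One detail worth noting: with an unreachable or misconfigured URI, <code>MongoClient</code> does not fail at construction time, and by default pymongo keeps retrying server selection for about 30 seconds before raising an error. In a per-request function it can be preferable to fail fast. Below is a hedged variant of the helper, not part of the original handler; the 5-second timeout and the explicit <code>ping</code> are illustrative choices:</p>
<pre><code class="lang-python">import logging

from pymongo import MongoClient
from pymongo.errors import PyMongoError

logger = logging.getLogger(__name__)

def get_mongo_client_with_timeout(uri, timeout_ms=5000):
    """Variant of get_mongo_client() that fails fast on unreachable servers."""
    try:
        # serverSelectionTimeoutMS caps how long pymongo waits for a usable
        # server before raising ServerSelectionTimeoutError (a PyMongoError).
        client = MongoClient(uri, serverSelectionTimeoutMS=timeout_ms)
        # Force an immediate round trip so a bad URI surfaces here rather
        # than on the first query inside the request handler.
        client.admin.command("ping")
        return client
    except PyMongoError as e:
        logger.error(f"MongoDB connection failed within {timeout_ms} ms: {e}")
        raise
</code></pre>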
<p><strong>Request Handling Logic</strong>:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">handle</span>(<span class="hljs-params">event, context</span>):</span>
    <span class="hljs-keyword">try</span>:
        client = get_mongo_client()
        db = client[DB_NAME]
        collection = db[COLLECTION_NAME]

        <span class="hljs-keyword">if</span> event.method == <span class="hljs-string">'POST'</span>:
            result = collection.update_one(
                {<span class="hljs-string">"_id"</span>: DOCUMENT_ID},
                {<span class="hljs-string">"$inc"</span>: {<span class="hljs-string">"count"</span>: <span class="hljs-number">1</span>}},
                upsert=<span class="hljs-literal">True</span>
            )
            <span class="hljs-keyword">if</span> result.matched_count &gt; <span class="hljs-number">0</span> <span class="hljs-keyword">or</span> result.upserted_id:
                logger.info(<span class="hljs-string">"Visitor count incremented"</span>)
                <span class="hljs-keyword">return</span> {
                    <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>,
                    <span class="hljs-string">"body"</span>: {<span class="hljs-string">"message"</span>: <span class="hljs-string">"one visitor added"</span>}
                }
            <span class="hljs-keyword">else</span>:
                <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"Increment failed"</span>)

        <span class="hljs-keyword">elif</span> event.method == <span class="hljs-string">'GET'</span>:
            doc = collection.find_one({<span class="hljs-string">"_id"</span>: DOCUMENT_ID})
            count = doc.get(<span class="hljs-string">"count"</span>, <span class="hljs-number">0</span>) <span class="hljs-keyword">if</span> doc <span class="hljs-keyword">else</span> <span class="hljs-number">0</span>
            logger.info(<span class="hljs-string">f"Fetched count: <span class="hljs-subst">{count}</span>"</span>)
            <span class="hljs-keyword">return</span> {
                <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>,
                <span class="hljs-string">"body"</span>: {<span class="hljs-string">"count"</span>: count}
            }

        <span class="hljs-keyword">else</span>:
            <span class="hljs-keyword">return</span> {
                <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">405</span>,
                <span class="hljs-string">"body"</span>: {<span class="hljs-string">"error"</span>: <span class="hljs-string">"Method not allowed"</span>}
            }

    <span class="hljs-keyword">except</span> PyMongoError <span class="hljs-keyword">as</span> e:
        logger.error(<span class="hljs-string">f"Database error: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">500</span>,
            <span class="hljs-string">"body"</span>: {<span class="hljs-string">"error"</span>: <span class="hljs-string">"Internal server error"</span>}
        }
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        logger.error(<span class="hljs-string">f"Unexpected error: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">500</span>,
            <span class="hljs-string">"body"</span>: {<span class="hljs-string">"error"</span>: <span class="hljs-string">"Internal server error"</span>}
        }
</code></pre>
<p>The entry point branches on event.method, supporting a single-handler design for simplicity. For POST, update_one with $inc ensures atomic increments (guaranteed by MongoDB at the document level), while upsert=True handles initialization without separate checks. The GET uses find_one with a safe fallback to 0. Unsupported methods return 405 explicitly. Wrapping in try-except isolates failures, returning generic 500 errors to avoid leaking details, while logging specifics for debugging.</p>
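<p>A possible refinement, not used in the handler above, is to collapse the POST-then-GET pattern into a single round trip with <code>find_one_and_update</code>, which increments the counter and returns the updated document in one atomic operation. A short sketch:</p>
<pre><code class="lang-python">from pymongo import ReturnDocument

def increment_and_get(collection, document_id):
    """Atomically increment the counter and return the new value."""
    doc = collection.find_one_and_update(
        {"_id": document_id},
        {"$inc": {"count": 1}},
        upsert=True,
        return_document=ReturnDocument.AFTER,  # return the document after the $inc
    )
    return doc["count"]
</code></pre>
<p>With this variant, a single POST could both increment and return the count; the two-call flow used in this article simply keeps the GET and POST paths explicit and separate.</p>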
<p>In <strong>requirements.txt</strong>, specify the Python package you need:</p>
<pre><code class="lang-bash">pymongo==4.10.0
</code></pre>
<p>When the function builds, OpenFaaS will install this dependency automatically into the container.</p>
<h3 id="heading-define-the-stack-configuration"><strong>Define the Stack Configuration</strong></h3>
<p>Each OpenFaaS function is described in a YAML file, stack.yml.</p>
<p>Let’s define it clearly:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># stack.yml</span>
<span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span>
<span class="hljs-attr">provider:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">openfaas</span>
  <span class="hljs-attr">gateway:</span> <span class="hljs-string">http://127.0.0.1:8080</span>
<span class="hljs-attr">functions:</span>
  <span class="hljs-attr">visitor-counter:</span>
    <span class="hljs-attr">lang:</span> <span class="hljs-string">python3-http</span>
    <span class="hljs-attr">handler:</span> <span class="hljs-string">./visitor-counter</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">ghcr.io/davwk/visitor-counter:v1</span> <span class="hljs-comment"># Replace with your registry</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">DB_NAME:</span> <span class="hljs-string">visitor_db</span>
      <span class="hljs-attr">COLLECTION_NAME:</span> <span class="hljs-string">counters</span>
      <span class="hljs-attr">DOCUMENT_ID:</span> <span class="hljs-string">visitor_count</span>
    <span class="hljs-attr">secrets:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongo-uri</span>
    <span class="hljs-attr">requests:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">100m</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">128Mi</span>
    <span class="hljs-attr">limits:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">200m</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">256Mi</span>
</code></pre>
<ul>
<li><p><code>provider.gateway</code> → the endpoint of your OpenFaaS deployment.</p>
</li>
<li><p><code>functions</code> → list of deployed functions (you can have many).</p>
</li>
<li><p><code>lang, handler, image</code> → define how the function is built and deployed.</p>
</li>
<li><p><code>secrets</code> → links the function to the mongo-uri secret.</p>
</li>
<li><p><code>requests/limits</code> → resource boundaries for Kubernetes scheduling.</p>
</li>
</ul>
<h3 id="heading-create-the-mongodb-secret"><strong>Create the MongoDB Secret</strong></h3>
<p>Now let’s create the mongo-uri secret before deploying:</p>
<pre><code class="lang-bash">$ faas-cli secret create mongo-uri --from-literal=<span class="hljs-string">"&lt;your-mongodb-uri&gt;"</span>
</code></pre>
<p>OpenFaaS will automatically mount it under /var/openfaas/secrets/mongo-uri during runtime.</p>
<h3 id="heading-build-and-deploy"><strong>Build and Deploy</strong></h3>
<p>Build the function and deploy it to your OpenFaaS cluster:</p>
<pre><code class="lang-bash">$ faas-cli build -f ./stack.yml
$ faas-cli deploy -f ./stack.yml
</code></pre>
<p>Once the deployment completes, you can check the UI or test directly via curl:</p>
<pre><code class="lang-bash">curl -X POST baseurl/<span class="hljs-keyword">function</span>/visitor-counter -v  <span class="hljs-comment"># Expect 200 {"message": "one visitor added"}</span>
curl -X GET baseurl/<span class="hljs-keyword">function</span>/visitor-counter -v   <span class="hljs-comment"># Expect 200 {"count": N}</span>
</code></pre>
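<p>Equivalently, you can script the same smoke test in Python, for example with the <code>requests</code> library (an extra dependency on your workstation, not part of the stack); the base URL below is a placeholder for your own ingress host:</p>
<pre><code class="lang-python"># smoke_test.py -- exercise the deployed function through the gateway ingress
import requests

BASE_URL = "http://function.local"  # placeholder: replace with your ingress host
ENDPOINT = f"{BASE_URL}/function/visitor-counter"

# POST increments the counter and should return a confirmation message
post_resp = requests.post(ENDPOINT, timeout=10)
post_resp.raise_for_status()
print("POST:", post_resp.json())

# GET returns the current total
get_resp = requests.get(ENDPOINT, timeout=10)
get_resp.raise_for_status()
print("GET:", get_resp.json())
</code></pre>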
<p>Your website’s JavaScript file is responsible for sending requests to the backend function. Initially, it probably pointed to a placeholder URL. Now that your function is actually deployed and reachable through your ingress, you should <strong>update that URL if required</strong>.</p>
<p>Once you’ve updated the JS file, you need to <strong>rebuild your website image</strong> and redeploy it to Kubernetes so the changes take effect.</p>
<h2 id="heading-test-the-full-stack"><strong>Test the Full Stack</strong></h2>
<p>Now open your browser and navigate to: <a target="_blank" href="http://website.127.0.0.1.sslip.io"><strong>yourwebsite-url</strong></a></p>
<p>When the page loads:</p>
<ul>
<li><p>The frontend JavaScript automatically triggers the <strong>POST</strong> request → your OpenFaaS function increments the counter in MongoDB.</p>
</li>
<li><p>Then it sends a <strong>GET</strong> request → retrieves the new value and displays it on the page.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761823947211/39c58cf5-5256-412a-b16b-0b01d40dc616.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-troubleshooting-cors-handling-errors"><strong>Troubleshooting CORS Handling Errors</strong></h2>
<p>If things don’t work as expected, the most likely issue you’ll face is related to <strong>CORS (Cross-Origin Resource Sharing)</strong>.</p>
<p>To confirm this, inspect the <strong>Console</strong> tab in your browser’s developer tools. If you see CORS-related errors, it’s because the <strong>host URL of your website</strong> is different from the <strong>host URL of your function gateway</strong>.</p>
<p>CORS is a browser security mechanism designed to prevent JavaScript code from calling resources from other domains that might execute malicious actions or leak data. So when your frontend and function gateway don’t share the same domain or subdomain, the browser blocks the request by default.</p>
<p>To fix this issue when using <strong>Traefik</strong>, you can create a <strong>CORS middleware</strong> resource to explicitly allow requests from your website’s origin (or use a wildcard * if you’re working in a development environment):</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">traefik.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Middleware</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">cors</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">openfaas</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">headers:</span>
    <span class="hljs-attr">accessControlAllowMethods:</span> [<span class="hljs-string">"GET"</span>, <span class="hljs-string">"POST"</span>, <span class="hljs-string">"OPTIONS"</span>]
    <span class="hljs-attr">accessControlAllowOriginList:</span> [<span class="hljs-string">"*"</span>] <span class="hljs-comment"># Wildcard for dev; restrict to your site origin (e.g., ["http://website.127.0.0.1.nip.io"]) for production</span>
    <span class="hljs-attr">accessControlAllowHeaders:</span> [<span class="hljs-string">"Content-Type"</span>]
    <span class="hljs-attr">accessControlAllowCredentials:</span> <span class="hljs-literal">false</span>
    <span class="hljs-attr">accessControlMaxAge:</span> <span class="hljs-number">100</span>
    <span class="hljs-attr">addVaryHeader:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>After applying this middleware, update your <strong>Ingress</strong> for the function gateway to reference it using the annotation below:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">traefik.ingress.kubernetes.io/router.middlewares:</span> <span class="hljs-string">openfaas-cors@kubernetescrd</span>
</code></pre>
<p>Your final Ingress definition should look like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">openfaas-gateway</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">openfaas</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">traefik.ingress.kubernetes.io/router.entrypoints:</span> <span class="hljs-string">web</span>
    <span class="hljs-attr">traefik.ingress.kubernetes.io/router.middlewares:</span> <span class="hljs-string">openfaas-cors@kubernetescrd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-comment"># rest of the definition</span>
</code></pre>
<p>Once both configurations are applied, the CORS issue should be resolved, allowing your site to communicate properly with your deployed functions.</p>
<p>So that’s all for this article. Thank you for reading, and I hope it was useful to you.</p>
]]></content:encoded></item><item><title><![CDATA[From legacy to cloud serverless]]></title><description><![CDATA[Welcome to the third part of this serie! In this segment, we dive into testing and pipeline configuration on Google Cloud, specifically focusing on continuous integration using Cloud Build, On-demand Vulnerability Scanner, and Artifact Registry. You ...]]></description><link>https://blog.cloudvio.net/from-legacy-to-cloud-serverless-1-1</link><guid isPermaLink="true">https://blog.cloudvio.net/from-legacy-to-cloud-serverless-1-1</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703669834526/3e72b1f9-af9f-4d40-8103-375935c196ea.png" alt class="image--center mx-auto" /></p>
<p>Welcome to the third part of this series! In this segment, we dive into testing and pipeline configuration on Google Cloud, specifically focusing on continuous integration using Cloud Build, On-demand Vulnerability Scanner, and Artifact Registry. You can find the project repository <a target="_blank" href="https://github.com/davWK/legacy-to-cloud-serverless">here</a>, or, if you prefer, you can bring your own project.</p>
<p>Let me walk you through the pipeline. With each push to the main branch, Cloud Build is triggered. First, it runs unit tests on the code. If the tests pass, it proceeds to build the image. After the image is built, Cloud Build invokes the image scanner to ensure it's free of vulnerabilities. If all is well, the image is sent and stored in the Artifact Registry, ready for deployment. But for this article, we'll focus solely on the CI part. Let's start with the tests.</p>
<h1 id="heading-unittest">Unittest</h1>
<p>Here's the code we plan to test</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask
<span class="hljs-keyword">from</span> pymongo <span class="hljs-keyword">import</span> MongoClient
<span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask, render_template, request, url_for, redirect
<span class="hljs-keyword">from</span> bson.objectid <span class="hljs-keyword">import</span> ObjectId
<span class="hljs-keyword">import</span> mongomock



app = Flask(__name__, template_folder=<span class="hljs-string">'templates'</span>)

<span class="hljs-keyword">if</span> os.environ.get(<span class="hljs-string">'TESTING'</span>):
    client = mongomock.MongoClient()
<span class="hljs-keyword">else</span>:
    client = MongoClient(os.environ[<span class="hljs-string">'MONGO_URI'</span>])


db = client.flask_db
todos = db.todos


<span class="hljs-meta">@app.route('/', methods=('GET', 'POST'))</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">index</span>():</span>
    <span class="hljs-keyword">if</span> request.method==<span class="hljs-string">'POST'</span>:
        content = request.form[<span class="hljs-string">'content'</span>]
        degree = request.form[<span class="hljs-string">'degree'</span>]
        todos.insert_one({<span class="hljs-string">'content'</span>: content, <span class="hljs-string">'degree'</span>: degree})
        <span class="hljs-keyword">return</span> redirect(url_for(<span class="hljs-string">'index'</span>))

    all_todos = todos.find()
    <span class="hljs-keyword">return</span> render_template(<span class="hljs-string">'index.html'</span>, todos=all_todos)


<span class="hljs-meta">@app.post('/&lt;id&gt;/delete/')</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">delete</span>(<span class="hljs-params">id</span>):</span>
    todos.delete_one({<span class="hljs-string">"_id"</span>: ObjectId(id)})
    <span class="hljs-keyword">return</span> redirect(url_for(<span class="hljs-string">'index'</span>))
</code></pre>
<p>For an explanation of the code, refer to the first article in this series.</p>
<p>Now, let's move on to the testing phase</p>
<p>The test is written using Python's built-in <code>unittest</code> module, which provides a framework for writing and running tests.</p>
<ol>
<li><p><strong>Import necessary modules and create a mock MongoDB instance</strong></p>
<p> The test begins by importing the necessary modules. <code>unittest</code> is the testing framework, <code>patch</code> and <code>MagicMock</code> from <code>unittest.mock</code> are used to replace parts of the system that you're testing with mock objects, and <code>ObjectId</code> from <code>bson.objectid</code> is used to create unique identifiers. The <code>app</code> and <code>todos</code> are imported from the <code>app.py</code> file. <code>mongomock</code> is used to create a mock MongoDB instance for testing, and <code>flask</code> is used to manipulate the request context during testing.</p>
<pre><code class="lang-python"> <span class="hljs-keyword">import</span> unittest
 <span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> patch, MagicMock
 <span class="hljs-keyword">from</span> bson.objectid <span class="hljs-keyword">import</span> ObjectId
 <span class="hljs-keyword">from</span> app <span class="hljs-keyword">import</span> app, todos
 <span class="hljs-keyword">import</span> mongomock
 <span class="hljs-keyword">import</span> flask

 mock_db = mongomock.MongoClient().db
</code></pre>
</li>
<li><p><strong>Define the test case</strong></p>
<p> A test case is defined by creating a new class that inherits from <code>unittest.TestCase</code>. This class will contain methods that represent individual tests.</p>
<pre><code class="lang-python"> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestApp</span>(<span class="hljs-params">unittest.TestCase</span>):</span>
</code></pre>
</li>
<li><p><strong>Set up the test environment</strong></p>
<p> The <code>setUp</code> method is a special method that is run before each test. Here, it's used to create a test client instance of the Flask app and enable testing mode.</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">setUp</span>(<span class="hljs-params">self</span>):</span>
     self.app = app.test_client()
     self.app.testing = <span class="hljs-literal">True</span>
</code></pre>
</li>
<li><p><strong>Write the test</strong></p>
<p> The <code>test_index_post</code> method is the actual test. It tests the behavior of the app when a POST request is sent to the index route (<code>/</code>).</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_index_post</span>(<span class="hljs-params">self</span>):</span>
</code></pre>
</li>
<li><p><strong>Mock the database operation</strong></p>
<p> The <code>patch</code> function is used to replace the <code>insert_one</code> method of <code>todos</code> with a <code>MagicMock</code>. This allows the test to simulate the behavior of the database operation without actually interacting with a real database.</p>
<pre><code class="lang-python"> <span class="hljs-keyword">with</span> patch(<span class="hljs-string">'app.todos.insert_one'</span>, new_callable=MagicMock) <span class="hljs-keyword">as</span> mock_insert_one:
</code></pre>
</li>
<li><p><strong>Create a test request context</strong></p>
<p> A test request context is created for the app using <code>app.test_request_context</code>. This allows the test to simulate a request to the app.</p>
<pre><code class="lang-python"> <span class="hljs-keyword">with</span> app.test_request_context(<span class="hljs-string">'/'</span>):
</code></pre>
</li>
<li><p><strong>Set the request method and form data</strong></p>
<p> The request method is set to 'POST' and the request form data is set to a dictionary with 'content' and 'degree' keys.</p>
<pre><code class="lang-python"> flask.request.method = <span class="hljs-string">'POST'</span>
 flask.request.form = {<span class="hljs-string">'content'</span>: <span class="hljs-string">'Test Content'</span>, <span class="hljs-string">'degree'</span>: <span class="hljs-string">'Test Degree'</span>}
</code></pre>
</li>
<li><p><strong>Send a POST request to the app</strong></p>
<p> A POST request is sent to the app using <code>self.app.post</code>. The form data is passed as the <code>data</code> argument.</p>
<pre><code class="lang-python"> result = self.app.post(<span class="hljs-string">'/'</span>, data=flask.request.form)
</code></pre>
</li>
<li><p><strong>Assert the expected results</strong></p>
<p> The <code>assertEqual</code> method is used to check that the status code of the response is 302. The <code>assert_called</code> method is used to check that the <code>insert_one</code> method was called.</p>
<pre><code class="lang-python"> self.assertEqual(result.status_code, <span class="hljs-number">302</span>)
 mock_insert_one.assert_called()
</code></pre>
</li>
</ol>
<p>This test ensures that when a POST request is sent to the index route with the correct form data, the app responds with a 302 status code and inserts the data into the database.</p>
<p>Your test code should look something like the following:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> unittest
<span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> patch, MagicMock
<span class="hljs-keyword">from</span> bson.objectid <span class="hljs-keyword">import</span> ObjectId
<span class="hljs-keyword">from</span> app <span class="hljs-keyword">import</span> app, todos
<span class="hljs-keyword">import</span> mongomock
<span class="hljs-keyword">import</span> flask

<span class="hljs-comment"># Create a mock MongoDB instance</span>
mock_db = mongomock.MongoClient().db

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestApp</span>(<span class="hljs-params">unittest.TestCase</span>):</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">setUp</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-comment"># Create a test client instance</span>
        self.app = app.test_client()
        <span class="hljs-comment"># Enable testing mode. Exceptions are propagated rather than handled by the the app's error handlers</span>
        self.app.testing = <span class="hljs-literal">True</span> 

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_index_post</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-comment"># Patch the insert_one method of todos with a MagicMock</span>
        <span class="hljs-keyword">with</span> patch(<span class="hljs-string">'app.todos.insert_one'</span>, new_callable=MagicMock) <span class="hljs-keyword">as</span> mock_insert_one:
            <span class="hljs-comment"># Create a test request context for the app</span>
            <span class="hljs-keyword">with</span> app.test_request_context(<span class="hljs-string">'/'</span>):
                <span class="hljs-comment"># Set the request method to 'POST'</span>
                flask.request.method = <span class="hljs-string">'POST'</span>
                <span class="hljs-comment"># Set the request form data</span>
                flask.request.form = {<span class="hljs-string">'content'</span>: <span class="hljs-string">'Test Content'</span>, <span class="hljs-string">'degree'</span>: <span class="hljs-string">'Test Degree'</span>}
                <span class="hljs-comment"># Send a POST request to the app</span>
                result = self.app.post(<span class="hljs-string">'/'</span>, data=flask.request.form)
                <span class="hljs-comment"># Assert that the status code of the response is 302</span>
                self.assertEqual(result.status_code, <span class="hljs-number">302</span>)
                <span class="hljs-comment"># Assert that the insert_one method was called</span>
                mock_insert_one.assert_called()
</code></pre>
<p>Now, to execute the test, set the environment variable <code>TESTING=True</code> before running it. Setting <code>TESTING=True</code> switches the application to a mock MongoDB client for testing instead of the real MongoDB database.</p>
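<p>Locally, running the test might look like this (a minimal sketch, assuming the app, test, and requirements files live in the <code>docker/</code> directory, as in the Cloud Build steps described later):</p>
<pre><code class="lang-bash"># Install the test dependencies
pip install -r docker/requirements-test.txt
# Enable the mock MongoDB client, then run the test
export TESTING=True
cd docker
python -m unittest test.py
</code></pre>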
<p>If your test passes, let's move on to configuring Cloud Build.</p>
<h1 id="heading-cloud-build-setup">Cloud Build setup</h1>
<p>Follow the <a target="_blank" href="https://cloud.google.com/build/docs/automate-builds#connect_to_your_repository">guide</a> to connect Cloud Build to your repository and <a target="_blank" href="https://cloud.google.com/build/docs/automate-builds#create_a_trigger">this one</a> for initial configurations.</p>
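<p>If you prefer the terminal, a push trigger can also be created with the gcloud CLI once the repository is connected. A rough sketch (the trigger name, repository owner, and repository name are placeholders, and the exact flags may vary with your SDK version):</p>
<pre><code class="lang-bash"># Create a trigger that runs cloudbuild.yaml on every push to main
gcloud builds triggers create github \
  --name="from-legacy-to-cloud-ci" \
  --repo-owner="YOUR_GITHUB_USERNAME" \
  --repo-name="YOUR_REPOSITORY" \
  --branch-pattern="^main$" \
  --build-config="cloudbuild.yaml"
</code></pre>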
<p>Once that's done, let's move on to writing the Cloud Build configuration file, where we'll instruct it on how to execute the pipeline, the steps involved, dependencies, and so on.</p>
<h1 id="heading-cloud-build-config-file">Cloud Build config file</h1>
<p>The Cloud Build Config file is written in YAML, a human-readable data serialization language.</p>
<p>Here are the main sections of our config file:</p>
<ol>
<li><p><strong>Substitutions</strong>: These are user-defined variables that can be replaced in the Cloud Build configuration file. They are defined under the <code>substitutions</code> key. In this case, <code>_REGION</code>, <code>_REPOSITORY</code>, <code>_IMAGE</code>, and <code>_SEVERITY</code> are defined.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">substitutions:</span>
   <span class="hljs-attr">_REGION:</span> <span class="hljs-string">us-central1</span>
   <span class="hljs-attr">_REPOSITORY:</span> <span class="hljs-string">from-legacy-to-cloud</span>
   <span class="hljs-attr">_IMAGE:</span> <span class="hljs-string">from-legacy-to-cloud</span>
   <span class="hljs-attr">_SEVERITY:</span> <span class="hljs-string">'"CRITICAL|HIGH"'</span>
</code></pre>
</li>
<li><p><strong>Steps</strong>: These are the operations that Cloud Build will perform. Each step is a separate action and they are executed in the order they are defined.</p>
<ul>
<li><p><strong>Step 0: Install test dependencies</strong>: This step uses a Python 3.10 Docker image to install the test dependencies listed in <code>docker/requirements-test.txt</code>. The <code>entrypoint</code> is set to <code>/bin/bash</code>, which means that the command that follows will be executed in a bash shell. The <code>args</code> key specifies the command to be executed, which in this case is a pip install command. The <code>-c</code> flag tells bash to read commands from the following string. The <code>|</code> character allows us to write multiple commands, which will be executed in order.</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">'python:3.10-slim'</span>
    <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">'/bin/bash'</span>
    <span class="hljs-attr">args:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'-c'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">|
        pip install --user -r docker/requirements-test.txt
</span>    <span class="hljs-attr">id:</span> <span class="hljs-string">'install-test-dependencies'</span>
</code></pre>
</li>
<li><p><strong>Step 1: Run unit tests</strong>: This step also uses a Python 3.10 Docker image to run the unit tests defined in <code>test.py</code>. The <code>export TESTING=True</code> command sets an environment variable <code>TESTING</code> to <code>True</code>, which can be used to change the behavior of the application during testing. The <code>cd docker</code> command changes the current directory to <code>docker</code>, where the test file is located. The <code>python -m unittest test.py</code> command runs the unit tests in <code>test.py</code>.</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">'python:3.10-slim'</span>
    <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">'/bin/bash'</span>
    <span class="hljs-attr">args:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'-c'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">|
        export TESTING=True
        cd docker 
        python -m unittest test.py
</span>    <span class="hljs-attr">id:</span> <span class="hljs-string">'run-tests'</span>
</code></pre>
</li>
<li><p><strong>Step 2: Build the Docker image</strong>: This step uses the <code>docker</code> Cloud Builder to build a Docker image from the Dockerfile located in the <code>docker/</code> directory. The image is tagged with the commit SHA. The <code>waitFor</code> key is used to specify that this step should wait for the <code>run-tests</code> step to complete before it starts. The <code>args</code> key specifies the command to be executed, which in this case is a docker build command. The <code>-t</code> flag is used to name and optionally tag the image in the 'name:tag' format.</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">'gcr.io/cloud-builders/docker'</span>
    <span class="hljs-attr">args:</span> [<span class="hljs-string">'build'</span>, <span class="hljs-string">'-t'</span>, <span class="hljs-string">'$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'</span>, <span class="hljs-string">'docker/'</span>]
    <span class="hljs-attr">waitFor:</span> [<span class="hljs-string">'run-tests'</span>]
    <span class="hljs-attr">id:</span> <span class="hljs-string">'build-image'</span>
</code></pre>
</li>
</ul>
</li>
</ol>
<ul>
<li><p><strong>Step 3: Inspect the Docker image and write the digest to a file</strong>: This step uses the <code>docker</code> Cloud Builder to inspect the Docker image and write the image digest to a file. The image digest is a unique identifier for the image. The <code>docker image inspect</code> command retrieves detailed information about the Docker image. The <code>--format</code> option is used to format the output using Go templates. The <code>{{index .RepoTags 0}}@{{.Id}}</code> template retrieves the first tag of the image and the image ID. The <code>&gt;</code> operator redirects the output to a file. The <code>&amp;&amp;</code> operator is used to execute the <code>cat</code> command only if the previous command succeeded.</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">'gcr.io/cloud-builders/docker'</span>
    <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">'/bin/bash'</span>
    <span class="hljs-attr">args:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'-c'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">|
        docker image inspect $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA --format '{{index .RepoTags 0}}@{{.Id}}' &gt; /workspace/image-digest.txt &amp;&amp;
        cat /workspace/image-digest.txt
</span>    <span class="hljs-attr">id:</span> <span class="hljs-string">'inspect-image'</span>
</code></pre>
</li>
<li><p><strong>Step 4: Scan the Docker image for vulnerabilities</strong>: This step uses the <code>cloud-sdk</code> Cloud Builder to scan the Docker image for vulnerabilities. The scan ID is written to a file. The <code>gcloud artifacts docker images scan</code> command scans the Docker image for vulnerabilities. The <code>--format='value(response.scan)'</code> option is used to retrieve the scan ID from the response. The <code>&gt;</code> operator redirects the output to a file.</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">scan</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">gcr.io/google.com/cloudsdktool/cloud-sdk</span>
    <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">/bin/bash</span>
    <span class="hljs-attr">args:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">|
      gcloud artifacts docker images scan $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA \
      --format='value(response.scan)' &gt; /workspace/scan_id.txt</span>
</code></pre>
</li>
<li><p><strong>Step 5: Check the severity of any vulnerabilities found</strong>: This step uses the <code>cloud-sdk</code> Cloud Builder to list the vulnerabilities found in the Docker image and check their severity. If any vulnerabilities with a severity matching <code>_SEVERITY</code> are found, the build fails. The <code>gcloud artifacts docker images list-vulnerabilities</code> command lists the vulnerabilities found in the Docker image. The <code>--format='value(vulnerability.effectiveSeverity)'</code> option is used to retrieve the severity of each vulnerability. The <code>grep -Exq $_SEVERITY</code> command checks if any of the severities match <code>_SEVERITY</code>. The <code>echo</code> command prints a message and the <code>exit 1</code> command terminates the build if a match is found.</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">severity</span> <span class="hljs-string">check</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">gcr.io/google.com/cloudsdktool/cloud-sdk</span>
    <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">/bin/bash</span>
    <span class="hljs-attr">args:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">|
      gcloud artifacts docker images list-vulnerabilities $(cat /workspace/scan_id.txt) \
      --format='value(vulnerability.effectiveSeverity)' | if grep -Exq $_SEVERITY; \
      then echo 'Failed vulnerability check' &amp;&amp; exit 1; else exit 0; fi</span>
</code></pre>
</li>
<li><p><strong>Step 6: Push the Docker image to Google Cloud Artifact Registry</strong>: This step uses the <code>docker</code> Cloud Builder to push the Docker image to the Google Cloud Artifact Registry. The <code>waitFor</code> key is used to specify that this step should wait for the <code>severity check</code> step to complete before it starts. The <code>docker push</code> command pushes the Docker image to a repository.</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">'gcr.io/cloud-builders/docker'</span>
    <span class="hljs-attr">args:</span> [<span class="hljs-string">'push'</span>, <span class="hljs-string">'$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'</span>]
    <span class="hljs-attr">id:</span> <span class="hljs-string">'push-image'</span>
    <span class="hljs-attr">waitFor:</span> [<span class="hljs-string">'severity check'</span>]
</code></pre>
</li>
<li><p><strong>Images</strong>: This key specifies the Docker images that Cloud Build should build and push to the Google Cloud Artifact Registry. In this case, it's the Docker image built in Step 2.</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">images:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'</span>
</code></pre>
</li>
</ul>
<p>This <code>cloudbuild.yaml</code> file defines a complete CI/CD pipeline for our application. It installs test dependencies, runs unit tests, builds a Docker image, inspects the image, scans the image for vulnerabilities, checks the severity of any vulnerabilities found, and pushes the image to the Google Cloud Artifact Registry. This pipeline ensures that the application is tested, secure, and ready for deployment.</p>
<p>The complete config file should look like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">substitutions:</span>
  <span class="hljs-attr">_REGION:</span> <span class="hljs-string">us-central1</span>
  <span class="hljs-attr">_REPOSITORY:</span> <span class="hljs-string">from-legacy-to-cloud</span>
  <span class="hljs-attr">_IMAGE:</span> <span class="hljs-string">from-legacy-to-cloud</span>
  <span class="hljs-attr">_SEVERITY:</span> <span class="hljs-string">'"CRITICAL|HIGH"'</span>

<span class="hljs-attr">steps:</span>
<span class="hljs-comment"># Step 0: Install test dependencies</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'install-test-dependencies'</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">'python:3.10-slim'</span>
  <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">'/bin/bash'</span>
  <span class="hljs-attr">args:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">'-c'</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">|
      pip install --user -r docker/requirements-test.txt
</span>
<span class="hljs-comment"># Step 1: Run unit tests</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'run-tests'</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">'python:3.10-slim'</span>
  <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">'/bin/bash'</span>
  <span class="hljs-attr">args:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">'-c'</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">|
      export TESTING=True
      cd docker 
      python -m unittest test.py
</span>
<span class="hljs-comment"># Step 2: Build the Docker image</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'build-image'</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">'gcr.io/cloud-builders/docker'</span>
  <span class="hljs-attr">args:</span> [<span class="hljs-string">'build'</span>, <span class="hljs-string">'-t'</span>, <span class="hljs-string">'$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'</span>, <span class="hljs-string">'docker/'</span>]
  <span class="hljs-attr">waitFor:</span> [<span class="hljs-string">'run-tests'</span>]

<span class="hljs-comment"># Step 3: Inspect the Docker image and write the digest to a file.</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'inspect-image'</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">'gcr.io/cloud-builders/docker'</span>
  <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">'/bin/bash'</span>
  <span class="hljs-attr">args:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">'-c'</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">|
      docker image inspect $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA --format '{{index .RepoTags 0}}@{{.Id}}' &gt; /workspace/image-digest.txt &amp;&amp;
      cat /workspace/image-digest.txt
</span>
<span class="hljs-comment"># Step 4: Scan the Docker image for vulnerabilities</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">scan</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">gcr.io/google.com/cloudsdktool/cloud-sdk</span>
  <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">/bin/bash</span>
  <span class="hljs-attr">args:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">|
    gcloud artifacts docker images scan $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA \
    --format='value(response.scan)' &gt; /workspace/scan_id.txt
</span>
<span class="hljs-comment"># Step 5: Check the severity of any vulnerabilities found</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">severity</span> <span class="hljs-string">check</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">gcr.io/google.com/cloudsdktool/cloud-sdk</span>
  <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">/bin/bash</span>
  <span class="hljs-attr">args:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">|
    gcloud artifacts docker images list-vulnerabilities $(cat /workspace/scan_id.txt) \
    --format='value(vulnerability.effectiveSeverity)' | if grep -Exq $_SEVERITY; \
    then echo 'Failed vulnerability check' &amp;&amp; exit 1; else exit 0; fi
</span>
<span class="hljs-comment"># Step 6: Push the Docker image to Google Cloud Artifact Registry</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'push-image'</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">'gcr.io/cloud-builders/docker'</span>
  <span class="hljs-attr">args:</span> [<span class="hljs-string">'push'</span>, <span class="hljs-string">'$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'</span>]
  <span class="hljs-attr">waitFor:</span> [<span class="hljs-string">'severity check'</span>]

<span class="hljs-attr">images:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">'$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'</span>
</code></pre>
<h1 id="heading-view-build-results">View build results</h1>
<p>Now, commit and push your changes. If the Cloud Build trigger is configured correctly, a build should start automatically. Connect to the Google Cloud Console and go to Cloud Build &gt; History to view your builds.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703532434725/a960d520-d599-4e42-a904-fc2a4f3b137f.png" alt class="image--center mx-auto" /></p>
<p>If it fails, click on it to see the error messages and troubleshoot to resolve the issues. Once the build succeeds, you can access the Artifact Registry and see the stored image, ready for use.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703532650705/d24a455a-7423-4481-8b41-96f867e89a15.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-what-next">What next ?</h1>
<p>Well, that wraps up this article. In the next one, we'll delve into automating deployments—the CD part. After vulnerability scanning of container images, we'll be putting security policies in place through Binary Authorization, allowing only approved/trusted images to be deployed on Cloud Run. But before that, we'll migrate our Mongo database to Google Firestore. After that, we'll deploy our app on Cloud Run and connect it to Firestore to make it fully operational.</p>
<p>See you in the next article. Until then, I'm available on social media (I'm more active on LinkedIn) for any information or additional suggestions. Thanks for reading!</p>
]]></content:encoded></item><item><title><![CDATA[Continuous Deployment to Kubernetes with ArgoCD]]></title><description><![CDATA[Continuous deployment (CD) is the process of automatically deploying changes to production. It is a key part of the DevOps toolchain, and it can help organizations to improve their software delivery speed, reliability, and security.
ArgoCD is a Kuber...]]></description><link>https://blog.cloudvio.net/continuous-deployment-to-kubernetes-with-argocd</link><guid isPermaLink="true">https://blog.cloudvio.net/continuous-deployment-to-kubernetes-with-argocd</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p>Continuous deployment (CD) is the process of automatically deploying changes to production. It is a key part of the DevOps toolchain, and it can help organizations to improve their software delivery speed, reliability, and security.</p>
<p>ArgoCD is a Kubernetes-native CD tool that can help you to automate the deployment of your applications to Kubernetes. It is a declarative tool, which means that you can define the desired state of your applications in a Git repository. ArgoCD will then automatically synchronize the actual state of your applications with the desired state.</p>
<p>ArgoCD is a powerful tool that can help you to improve your CD process. It is easy to use, and it can be integrated with a wide range of other tools. If you are looking for a way to automate the deployment of your applications to Kubernetes, then ArgoCD is a great option.</p>
<p>In this blog post, we will explore the process of setting up continuous integration (CI) using GitHub Actions, and then we will delve into configuring ArgoCD to handle the continuous deployment (CD) aspect.</p>
<h1 id="heading-why-argocd">Why argoCD ?</h1>
<p>For a brief overview of the benefits and reasons for using ArgoCD, I recommend checking out my LinkedIn post on the subject. In it, I discuss the key advantages of leveraging ArgoCD and how it can enhance your deployment process. Click below to access the post and get a quick understanding of why ArgoCD is a valuable tool for your software development and deployment needs.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.linkedin.com/posts/kodjovi-david-woglo_kubernetes-cicd-argocd-activity-7056054135531397120-sp9p?utm_source=share&amp;utm_medium=member_desktop">https://www.linkedin.com/posts/kodjovi-david-woglo_kubernetes-cicd-argocd-activity-7056054135531397120-sp9p?utm_source=share&amp;utm_medium=member_desktop</a></div>
<p> </p>
<h1 id="heading-requirements">Requirements</h1>
<ul>
<li><p>Installed <code>kubectl</code> command-line tool.</p>
</li>
<li><p>Have a Kubernetes cluster and a <code>kubeconfig</code> file. The default location for the <code>kubeconfig</code> file is <code>~/.kube/config</code>. If you don't have a Kubernetes cluster set up, you can follow this <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">guide</a> to quickly bootstrap Minikube.</p>
</li>
<li><p>A GitHub account.</p>
</li>
</ul>
<h1 id="heading-setting-up-continuous-integration-ci-using-github-actions">Setting Up Continuous Integration (CI) Using GitHub Actions</h1>
<p>For this activity, we will use a simple web application written in Python and utilizing Flask. The application has been specifically designed with cloud demonstrations and containers in mind.</p>
<p>To obtain the application code, you can fork this <a target="_blank" href="https://github.com/davWK/argoCD-demo.git">GitHub repository</a> to your own GitHub account and then clone it to your local machine to start making changes and customizations as needed.</p>
<p>To create the workflow instructions for GitHub Actions, you'll need to create a YAML file following a specific structure. Start by creating a file named <code>main.yml</code> inside the <code>.github/workflows</code> directory of your repository. This file will serve as the configuration file for the workflow. By following this standardized structure, you'll be able to define and customize the actions, triggers, and steps that make up your CI/CD pipeline.</p>
<p>Let's start the workflow configuration with the following structure:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">ArgoCD</span> <span class="hljs-string">demo</span> <span class="hljs-string">Build</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"main"</span>
  <span class="hljs-attr">pull_request:</span>
</code></pre>
<p>In this configuration, we've named the workflow "ArgoCD Demo Build". It will be triggered on both push events to the "main" branch and pull requests. The jobs defined next will each run on an "ubuntu-latest" virtual machine. This setup forms the foundation of the workflow.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">test:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">'Test'</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> 
        <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>

      <span class="hljs-bullet">-</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">make</span> <span class="hljs-string">test</span>
</code></pre>
<p>Above, we define a job called "Test" that will run on the latest Ubuntu environment (<code>ubuntu-latest</code>).</p>
<ol>
<li><p>The "Checkout" step ensures that the repository's code is available by using the <code>actions/checkout@v2</code> action.</p>
</li>
<li><p>The "Run tests" step executes the command <code>make test</code> to run the tests.</p>
</li>
</ol>
<pre><code class="lang-yaml">  <span class="hljs-attr">build:</span>  
    <span class="hljs-attr">name:</span> <span class="hljs-string">'Build &amp; Push to Docker Hub'</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">needs:</span> <span class="hljs-string">test</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> 
        <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>

      <span class="hljs-bullet">-</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">Login</span> <span class="hljs-string">to</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/login-action@v2</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_USERNAME</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_TOKEN</span> <span class="hljs-string">}}</span>
      <span class="hljs-bullet">-</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Buildx</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/setup-buildx-action@v2</span>
      <span class="hljs-bullet">-</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">push</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/build-push-action@v4</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">context:</span> <span class="hljs-string">.</span>
          <span class="hljs-attr">file:</span> <span class="hljs-string">./Dockerfile</span>
          <span class="hljs-attr">push:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">tags:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_USERNAME</span> <span class="hljs-string">}}/image-name:tag</span>
</code></pre>
<p>The next job is "Build &amp; Push to Docker Hub," which also runs on the <code>ubuntu-latest</code> environment.</p>
<ol>
<li><p>The "Checkout" step ensures that the repository's code is available by using the <code>actions/checkout@v2</code> action.</p>
</li>
<li><p>The "Login to Docker Hub" step authenticates with Docker Hub using the credentials that should be defined in the repository secrets in the GitHub repository settings.</p>
</li>
<li><p>The "Set up Docker Buildx" step uses the <code>docker/setup-buildx-action@v2</code> action to set up Docker Buildx for building the Docker image.</p>
</li>
<li><p>Finally, the "Build and push" step uses the <code>docker/build-push-action@v4</code> action to build the Docker image based on the specified <code>Dockerfile</code> and push it to Docker Hub. Make sure to modify the <code>tags</code> field to match your desired image name and version, and add the Docker Hub credentials (<code>DOCKERHUB_USERNAME</code> and <code>DOCKERHUB_TOKEN</code>) to the repository secrets before moving on (see the example just after this list).</p>
</li>
</ol>
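<p>As a convenience, the two secrets referenced above can be added from the terminal with the GitHub CLI; this is just one option, and adding them through the repository's Settings &gt; Secrets page works equally well:</p>
<pre><code class="lang-bash"># Store the Docker Hub credentials as repository secrets (run inside the repository)
gh secret set DOCKERHUB_USERNAME --body "your-dockerhub-username"
gh secret set DOCKERHUB_TOKEN --body "your-dockerhub-access-token"
</code></pre>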
<p>Once everything is in place, you can initiate the workflow by pushing your changes to the repository. This action will automatically trigger the workflow to start. To monitor and gain insights into the workflow execution, navigate to the "Actions" tab in GitHub. Here, you'll be able to view the workflow status, check the progress of each step, and identify any errors encountered. If any issues arise, carefully review the error messages provided and make the necessary fixes before proceeding to the next part, which involves setting up the continuous deployment (CD) using ArgoCD.</p>
<h1 id="heading-setting-up-continuous-deployment-cd-with-argocd">Setting Up Continuous Deployment (CD) with ArgoCD</h1>
<p>In this section, we will explore the process of setting up continuous deployment (CD) using ArgoCD. Building upon the foundation of continuous integration (CI) we established earlier with GitHub Actions, we will now focus on automating the deployment of our application to a Kubernetes cluster.</p>
<p>You have the flexibility to utilize any Kubernetes (k8s) cluster at your disposal, whether it's a cloud-based cluster, a bare-metal setup, or even local environments such as Minikube or MicroK8s. ArgoCD is compatible with various Kubernetes configurations, allowing you to seamlessly integrate it into your preferred infrastructure. This versatility enables you to leverage your existing infrastructure or choose a setup that best suits your needs for continuous deployment (CD) with ArgoCD.</p>
<p>To proceed further, we will be utilizing Minikube for our setup. Minikube provides a convenient and lightweight way to run a single-node Kubernetes cluster locally.</p>
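<p>If you don't have a cluster yet, bootstrapping one with Minikube can be as simple as the following (the docker driver is just one common choice; any supported driver works):</p>
<pre><code class="lang-bash"># Start a single-node local Kubernetes cluster
minikube start --driver=docker
# Confirm the node is Ready before installing anything on it
kubectl get nodes
</code></pre>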
<p>Now, let's proceed with the installation of ArgoCD. We will walk through the steps to set up ArgoCD on your chosen Kubernetes cluster, in this case, Minikube.</p>
<h2 id="heading-installing-argocd">Installing ArgoCD</h2>
<p>To install ArgoCD on your Kubernetes cluster, execute the following commands:</p>
<pre><code class="lang-bash">kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>The first command creates a namespace called "argocd" where ArgoCD will be installed. The second command applies the ArgoCD installation manifest, which can be accessed from the official ArgoCD GitHub repository. By executing these commands, you will initiate the installation process and set up ArgoCD within your cluster.</p>
<p>Once the installation is completed, you can verify the installation status by running the following command:</p>
<pre><code class="lang-bash">kubectl get nodes -n argocd
</code></pre>
<p>This command will display the pods in the "argocd" namespace, confirming that the ArgoCD components are successfully installed and running.</p>
<p>To access the ArgoCD web interface, you can use kubectl port-forwarding to connect to the API server. Execute the following command:</p>
<pre><code class="lang-bash">kubectl port-forward svc/argocd-server -n argocd 8080:443
</code></pre>
<p>This command will create a port-forwarding tunnel, allowing you to access the ArgoCD UI locally at <a target="_blank" href="https://localhost:8080"><code>https://localhost:8080</code></a>. Simply open a web browser and navigate to the provided URL to access the ArgoCD interface.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684423492405/5890283d-f639-40ca-9a07-10dfd0089545.png" alt class="image--center mx-auto" /></p>
<p>To log in to the ArgoCD UI, you will need to retrieve the password from the <code>argocd-initial-admin-secret</code> secret. Follow these steps:</p>
<ol>
<li>Retrieve the secret by executing the following command:</li>
</ol>
<pre><code class="lang-bash">kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
</code></pre>
<ol start="2">
<li><p>The output will include a field called <code>data</code>, which contains the base64-encoded password. Copy the value associated with the <code>password</code> key.</p>
</li>
<li><p>Decode the password using the <code>echo</code> and <code>base64</code> commands. Replace <code>encodedpassword</code> in the command below with the copied value:</p>
</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> encodedpassword | base64 --decode
</code></pre>
<ol start="4">
<li><p>The decoded password will be displayed in the terminal. Copy the password string.</p>
</li>
<li><p>Return to the ArgoCD UI login page. Enter <code>admin</code> as the username and paste the decoded password into the password field.</p>
</li>
</ol>
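<p>If you prefer a single command, the same password can be retrieved and decoded in one step (this is equivalent to the manual steps above):</p>
<pre><code class="lang-bash"># Fetch the initial admin password and decode it
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 --decode; echo
</code></pre>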
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684423544393/70eb07ea-33f3-4e43-b23e-d01bd648b7eb.png" alt class="image--center mx-auto" /></p>
<p>Currently, ArgoCD is empty as we haven't configured any applications yet. Let's proceed with configuring ArgoCD to connect to a GitHub repository where our deployment files will be hosted.</p>
<blockquote>
<p><mark>It's important to note that in best practices, it is recommended to separate the application repository from the deployment repository. However, for the purpose of this activity, we will keep the deployment files alongside the application files. Please keep in mind that this is not a recommended practice for production-ready environments. In such scenarios, it is crucial to separate the two repositories to ensure a more organized and manageable deployment workflow</mark>.</p>
</blockquote>
<h2 id="heading-configuring-argocd">Configuring ArgoCD</h2>
<p>To configure ArgoCD to connect to your GitHub repository and deploy your application:</p>
<ol>
<li>Create a YAML file, such as <code>argocd-config.yaml</code>, and add the following content:</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">argo-cd-demo</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>

  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">https://github.com/davWK/argoCD-demo.git</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">HEAD</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">deploy/kubernetes/</span>
  <span class="hljs-attr">destination:</span> 
    <span class="hljs-attr">server:</span> <span class="hljs-string">https://kubernetes.default.svc</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">demo-app-for-argo-cd</span>

  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">prune:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>Now, let's break down what each section of the YAML file does:</p>
<ul>
<li><p><code>metadata</code>: Specifies the metadata for the ArgoCD application, including its name and namespace.</p>
</li>
<li><p><code>spec.project</code>: Specifies the project within ArgoCD where the application belongs. In this case, it is set to the default project.</p>
</li>
<li><p><code>source</code>: Defines the source repository details:</p>
<ul>
<li><p><code>repoURL</code>: Specifies the URL of the GitHub repository where your application's deployment files are hosted.</p>
</li>
<li><p><code>targetRevision</code>: Specifies the target revision of the repository to deploy. Here, it is set to <code>HEAD</code>, meaning the latest revision.</p>
</li>
<li><p><code>path</code>: Specifies the path within the repository where your application's Kubernetes deployment files are located. This is the path ArgoCD will track for any modifications.</p>
</li>
</ul>
</li>
<li><p><code>destination</code>: Specifies the destination details for the deployment:</p>
<ul>
<li><p><code>server</code>: Specifies the URL of the Kubernetes API server. Here, it is set to <code>https://kubernetes.default.svc</code>, which refers to the cluster ArgoCD itself runs in; it can also point to an external cluster.</p>
</li>
<li><p><code>namespace</code>: Specifies the target namespace in which the application will be deployed. In this case, it is set to <code>demo-app-for-argo-cd</code>.</p>
</li>
</ul>
</li>
<li><p><code>syncPolicy</code>: Defines the synchronization policy for the application:</p>
<ul>
<li><p><code>automated</code>: Specifies that the synchronization should be automated, enabling self-healing and pruning capabilities.</p>
<ul>
<li><p><code>selfHeal</code>: Enables self-healing, ensuring the application stays in the desired state.</p>
</li>
<li><p><code>prune</code>: Enables pruning, removing any resources that are no longer defined in the deployment files.</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ol start="2">
<li>Save the file and apply the configuration by running the following command:</li>
</ol>
<pre><code class="lang-bash">kubectl apply -n argocd -f argocd-config.yaml
</code></pre>
<p>By applying this configuration, ArgoCD will establish a connection to the specified GitHub repository, fetch the deployment files from the specified path, and deploy the application to the designated namespace within the Kubernetes cluster.</p>
<p>Once you apply the configuration using the command <code>kubectl apply -n argocd -f argocd-config.yaml</code>, you will no longer need to manually apply any changes to your Kubernetes files. ArgoCD takes over the responsibility of tracking and applying changes automatically.</p>
<p>After the initial deployment, ArgoCD continuously monitors the specified GitHub repository and the Kubernetes files within it. Whenever there is a change detected in the repository, ArgoCD will automatically apply those changes to your Kubernetes cluster. This ensures that your application remains up-to-date with the latest version defined in the repository.</p>
<p>With ArgoCD in place, you can focus on making changes to your application's deployment files in the repository, and ArgoCD will handle the synchronization and deployment to the Kubernetes cluster for you. This simplifies the deployment process and provides a seamless experience for maintaining the desired state of your applications.</p>
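<p>Besides the web UI, the argocd CLI can be handy for checking an application's state from the terminal. A minimal sketch, assuming the CLI is installed and you have logged in through the port-forward set up earlier (<code>argocd login localhost:8080</code>):</p>
<pre><code class="lang-bash"># Show the sync and health status of the application created above
argocd app get argo-cd-demo
# Trigger a manual sync instead of waiting for the automated one
argocd app sync argo-cd-demo
</code></pre>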
<h2 id="heading-writing-the-python-app-deployment-file-for-kubernetes">Writing the python app deployment file for kubernetes</h2>
<p>At this stage, the configuration will be created in ArgoCD, but no application pods or services will be available. This is because we have not yet defined the Kubernetes deployment manifest that contains the deployment information for our Python demo app. However, once this manifest is in place, ArgoCD will automatically apply it, resulting in the deployment of the application.</p>
<p>To proceed, you need to create the Kubernetes deployment manifest file that describes the desired state of your application, such as the container image, ports, and any other necessary configurations. Once you have the deployment manifest ready, commit and push it to your GitHub repository.</p>
<p>ArgoCD will then detect the changes in the repository and automatically apply the deployment manifest, triggering the creation of the corresponding pods and services. This automatic synchronization ensures that the deployed application aligns with the desired state defined in the deployment manifest.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684426776959/0520e2ba-7f65-413b-a6b8-e64459e205f2.png" alt class="image--center mx-auto" /></p>
<p>To proceed with defining the Kubernetes deployment manifest for the Python demo app:</p>
<ol>
<li><p>Inside the <code>deploy/kubernetes</code> directory, create a new <code>deployment.yaml</code> file.</p>
</li>
<li><p>Open the <code>deployment.yaml</code> file and add the following content:</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">python-app-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">python-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">python-app</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">python-app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">image-name</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">imageurl</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">5000</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">python-app-service</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">NodePort</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">python-app</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">5000</span>
      <span class="hljs-attr">nodePort:</span> <span class="hljs-number">30000</span>
</code></pre>
<ol start="3">
<li>Save the file.</li>
</ol>
<p>This deployment manifest defines a Kubernetes Deployment and Service for the Python app. It specifies the container image, ports, replicas, and other necessary configurations. Remember to replace <code>image-name</code> and <code>imageurl</code> with your container name and the image you pushed to Docker Hub during the CI stage.</p>
<ul>
<li><p>The Deployment creates three replicas of the Python app pods.</p>
</li>
<li><p>The Service exposes the app using a NodePort type, making it accessible on port 30000 of the cluster nodes.</p>
</li>
</ul>
<p>Commit and push the <code>deployment.yaml</code> file to your GitHub repository. ArgoCD will automatically detect the changes and apply the deployment manifest, leading to the creation of the Python app deployment and service.</p>
<p>Once the synchronization is complete, you should see the app pods running and the service available for access.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684428672125/5d92e1ee-4d40-4e86-a2c1-f37fc968ae95.png" alt class="image--center mx-auto" /></p>
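<p>You can confirm the rollout from the terminal as well; the namespace below comes from the ArgoCD Application definition created earlier:</p>
<pre><code class="lang-bash"># Verify that the three replicas defined in the manifest are running
kubectl get pods -n demo-app-for-argo-cd
</code></pre>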
<p>To access the deployed Python app:</p>
<ol>
<li>Run the following command to get the service information:</li>
</ol>
<pre><code class="lang-bash">kubectl get svc -n &lt;namespace <span class="hljs-keyword">for</span> the Python app&gt;
</code></pre>
<p>Replace <code>&lt;namespace for the Python app&gt;</code> with the actual namespace where your Python app is deployed. This command will provide you with the details of the service, including its name, type, cluster IP, and port.</p>
<ol start="2">
<li>Once you have the service information, run the following command to set up port forwarding:</li>
</ol>
<pre><code class="lang-bash">kubectl port-forward -n &lt;namespace <span class="hljs-keyword">for</span> the Python app&gt; svc/python-app-service 8083:&lt;service port&gt;
</code></pre>
<p>Replace <code>&lt;namespace for the Python app&gt;</code> with the actual namespace where your Python app is deployed, and <code>&lt;service port&gt;</code> with the port number specified in your service configuration (e.g., 80).</p>
<p>This command establishes a connection between your local machine and the Python app service running in the Kubernetes cluster. It forwards traffic from your local port 8083 to the specified service port.</p>
<ol start="3">
<li>Now, you can access the deployed Python app by opening a web browser and navigating to <a target="_blank" href="http://localhost:8083"><code>http://localhost:8083</code></a>. This will direct your requests to the Python app service running in the Kubernetes cluster.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684429881758/879f296b-209e-49b5-8a80-8448cfe0484f.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In conclusion, setting up Continuous Integration (CI) and Continuous Deployment (CD) processes is crucial for efficient software development and deployment. In this article, we explored the steps to configure CI using GitHub Actions and CD with ArgoCD. By integrating these tools into your workflow, you can automate the build, test, and deployment processes, leading to faster and more reliable software delivery.</p>
<p>To learn more about ArgoCD and its capabilities, you can refer to the official ArgoCD documentation available <a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/getting_started/">here</a>. The documentation provides comprehensive information, including installation guides, usage examples, and advanced configurations.</p>
<p>For a practical demonstration and understanding of ArgoCD, you can watch the "ArgoCD tutorial" on YouTube by TechWorld with Nana.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/MeU5_k9ssrs">https://youtu.be/MeU5_k9ssrs</a></div>
<p> </p>
<p>To grasp the concept of GitHub Actions and its integration with CI/CD processes, you can watch the "GitHub Action Tutorial" video by TechWorld with Nana. This video explains the fundamentals and basic concepts of GitHub Actions.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/R8_veQiYBjI">https://youtu.be/R8_veQiYBjI</a></div>
<p> </p>
<p>Thanks for reading! I hope you found the information helpful and informative. If you have any questions or comments, please feel free to reach out to me or leave a comment below.</p>
]]></content:encoded></item><item><title><![CDATA[The Kubernetes Resume Challenge: Extra credit]]></title><description><![CDATA[Well, it's been a while since the first part of this article before I'm releasing this last part today. Simply because I've been a bit busy lately, but also because I knew nothing about Helm , all I knew about Helm was that it is is used to package K...]]></description><link>https://blog.cloudvio.net/the-kubernetes-resume-challenge-extra-credit</link><guid isPermaLink="true">https://blog.cloudvio.net/the-kubernetes-resume-challenge-extra-credit</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p>Well, it's been a while since the first part of this article before I'm releasing this last part today. Simply because I've been a bit busy lately, but also because I knew nothing about Helm , all I knew about Helm was that it is is used to package Kubernetes applications, that's all. So this break time, apart from the different things I had to do, I took it to learn Helm in fact, the concept behind it, how to use it, etc.</p>
<p>As you already know, to stay true to the spirit of the challenge and to keep it challenging, this article will not be a detailed step-by-step implementation guide, but rather a post about the steps taken, the decisions made, and how challenges were overcome. You can still refer to the documentation on GitHub for something more technical, but it is still highly recommended to do everything from scratch to learn properly.</p>
<p>So, the remaining steps were the extra credit: Package Everything in Helm, Implement Persistent Storage, and Implement CI/CD Pipeline. Let's see how I tackled them.</p>
<h1 id="heading-package-everything-in-helm">Package Everything in Helm</h1>
<p>As I mentioned earlier, at this stage my challenge was clear: I HAD TO LEARN HELM, not just what was needed for the challenge but also some of the details, in order to build a broad and deep enough understanding to guide my implementation choices effectively.</p>
<p>My Helm chart directory is therefore laid out as follows.</p>
<p>Basically, at the root we have these files:</p>
<ul>
<li><p><code>values.yaml</code> is a file in a Helm chart directory that allows you to set the values of configurable parameters in your Helm chart. It's a way to provide default configuration values for a chart. These values can be overridden by users during the installation of the chart or when upgrading a release.</p>
</li>
<li><p><code>Chart.yaml</code> is a mandatory file in a Helm chart directory. It contains basic information about the chart.</p>
</li>
<li><p>And the <code>templates/</code> directory contains the files that generate Kubernetes manifests when the chart is installed. It holds the actual Kubernetes manifests with placeholders that are populated from the values set in the <code>values.yaml</code> file, unless those values are overridden during installation or upgrade.</p>
</li>
</ul>
<p>You can visit the official Helm documentation to go into depth, or quickly understand the concepts with <a target="_blank" href="https://youtu.be/-ykwb1d0DXU?si=_gd1CknFsAE5f0Ji">this TWN tutorial</a>.</p>
<h1 id="heading-implement-persistent-storage">Implement Persistent Storage</h1>
<p>The goal here is to persist the data stored in the database and the changes made, so that if the deployments are deleted or redeployed, the data is still available.</p>
<p>To achieve this, I needed to create a Kubernetes PersistentVolumeClaim resource and associate it with the database deployment to mount the database's data storage location (/var/lib/mysql). This is where the database data and the data loaded by the initialization script are stored.</p>
<p>While this can be done without much hassle in an interactive or manual way (e.g., using <code>kubectl apply</code>), the challenge arises when installing with Helm. For the data to be loaded onto this persistent volume, the PVC must be deployed first. Otherwise, there will be errors and the pods will be stuck and won't reach the Ready state. To address this, I utilized a Helm feature called <strong>Helm hooks</strong>.</p>
<p>Helm hooks are a mechanism to intervene at certain points in a release's lifecycle. They allow you to provide Helm with instructions to perform specific operations based on Helm's lifecycle events, such as install, upgrade, delete, etc. I leveraged this in my case to specify that this PersistentVolumeClaim should be created before the Helm chart is installed or upgraded.</p>
<pre><code class="lang-yaml"><span class="hljs-string">...</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">mariadb-pvc</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">"helm.sh/hook":</span> <span class="hljs-string">pre-install,pre-upgrade</span>
    <span class="hljs-attr">"helm.sh/hook-weight":</span> <span class="hljs-string">"0"</span>
<span class="hljs-string">...</span>
</code></pre>
<ul>
<li><p><code>"</code><a target="_blank" href="http://helm.sh/hook"><code>helm.sh/hook</code></a><code>": pre-install,pre-upgrade</code> means that this resource will be managed separately from the rest of the chart's resources. It will be created before the rest of the chart's resources during an install or upgrade.</p>
</li>
<li><p><code>"</code><a target="_blank" href="http://helm.sh/hook-weight"><code>helm.sh/hook-weight</code></a><code>": "0"</code> is used to control the execution order of hooks. Hooks with lower weights will be executed before hooks with higher weights. In this case, the PersistentVolumeClaim will be one of the first resources created, as it has a weight of 0.</p>
</li>
</ul>
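<p>Putting it all together, here is a minimal sketch of what the complete hooked PVC manifest could look like; the access mode and storage size are illustrative values to adapt to your cluster, not the exact ones from my chart.</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
spec:
  accessModes:
    - ReadWriteOnce      # a single MariaDB pod mounts the volume
  resources:
    requests:
      storage: 5Gi       # illustrative size; adjust to your needs
</code></pre>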
<h1 id="heading-implement-cicd-pipeline">Implement CI/CD Pipeline</h1>
<p>Let's talk about the most fun part, shall we? The roller coaster of CI/CD, haha! Those endless failed jobs before finally seeing a glimmer of hope in the form of a green status, which was only achieved by painstakingly commenting out parts of the workflow file to troubleshoot each step individually. And then, as soon as one step is fixed, the red errors start popping up again, and the cycle repeats until all the steps are fixed and we have a functional pipeline. (It's quite exhilarating, really.)</p>
<p>Enough of that. When setting up this pipeline with GitHub Actions, I wanted to adopt a new Google Cloud security feature to authenticate my workflow when it accesses GCP resources. Previously, we used service account keys, which, if compromised, could put our GCP resources at risk. This is because with service account keys, there's no way to verify who or what is using the keys. So, anyone can access the resources if they get their hands on the keys.</p>
<p><img src="https://github.com/google-github-actions/auth/raw/main/docs/google-github-actions-auth-workload-identity-federation-through-service-account.svg" alt="Authenticate to Google Cloud from GitHub Actions with Workload Identity Federation through a Service Account" /></p>
<p>To address this, Google Cloud has introduced Workload Identity Federation, a more secure way to grant access to GCP resources from outside GCP. Workload Identity Federation allows you to authenticate the entity that will use the service account to access the resources. And in this case, the tokens generated are not long-lived but rather short-lived, just for the duration of the operation. So, even if the token is leaked, it will not be valid for accessing GCP resources.</p>
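<p>To give an idea of what this looks like in practice, here is a minimal sketch of a GitHub Actions job authenticating through Workload Identity Federation with the official <code>google-github-actions/auth</code> action; the pool, provider, and service account values are placeholders, not the ones from my actual setup.</p>
<pre><code class="lang-yaml">jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # required so the workflow can mint the OIDC token exchanged with Google Cloud
    steps:
      - uses: actions/checkout@v4
      - id: auth
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/github-pool/providers/github-provider
          service_account: ci-deployer@my-project.iam.gserviceaccount.com
      - uses: google-github-actions/setup-gcloud@v2
</code></pre>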
<p>For setting up Workload Identity Federation, refer to <a target="_blank" href="https://cloud.google.com/iam/docs/workload-identity-federation-with-deployment-pipelines">this</a>.</p>
<h1 id="heading-projects-github-repositoryhttpsgithubcomdavwkk8s-resume-challengegit"><a target="_blank" href="https://github.com/davWK/k8s-Resume-Challenge.git">Project's Github repository</a></h1>
]]></content:encoded></item><item><title><![CDATA[My Kubernetes Resume Challenge]]></title><description><![CDATA[Given the impact and aftermath of the Cloud Resume Challenge (CRC) and the demand from individuals for a Kubernetes Challenge, Forrest Brazeal in collaboration with KodeKloud has launched a spin-off of the CRC, focusing on Kubernetes. The Kubernetes ...]]></description><link>https://blog.cloudvio.net/my-kubernetes-resume-challenge</link><guid isPermaLink="true">https://blog.cloudvio.net/my-kubernetes-resume-challenge</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p>Given the impact and aftermath of the Cloud Resume Challenge (CRC) and the demand from individuals for a Kubernetes Challenge, <a target="_blank" href="https://forrestbrazeal.com/">Forrest Brazeal</a> in collaboration with <a target="_blank" href="https://kodekloud.com/">KodeKloud</a> has launched a spin-off of the CRC, focusing on Kubernetes. The Kubernetes Resume Challenge aims to highlight proficiency in Kubernetes and containerization, demonstrating the ability to deploy, scale, and manage web applications efficiently in a Kubernetes environment, emphasizing cloud-native deployment skills. As is customary, CRCs are not undertaken without publishing an article detailing the process and how the challenge unfolded. This article serves that purpose. If this is your first encounter with the CRC, I invite you to click <a target="_blank" href="https://cloudresumechallenge.dev/">here</a> to learn more. For insights into my previous CRC experiences, particularly with GCP and AWS, I encourage you to read those articles. Without further ado, let's delve into how I approached taking this version of the CRC: the <a target="_blank" href="https://cloudresumechallenge.dev/docs/extensions/kubernetes-challenge/">Kubernetes Resume Challenge</a></p>
<p>The first phase is the certification phase, or as I interpret it, the phase of acquiring knowledge and skills, because the goal is to ensure you have a solid understanding of Kubernetes concepts and practical experience. For this purpose, it is recommended in the challenge to complete the <a target="_blank" href="https://www.kodekloud.com/p/kubernetes-certification-course">Certified Kubernetes Application Developer (CKAD)</a> course by KodeKloud. Personally, I have already been playing with Kubernetes for a while. Therefore, I started the challenge directly. Apart from what is recommended here to gain an understanding of Kubernetes concepts and practical experience, I personally recommend <a target="_blank" href="https://youtu.be/s_o8dwzRlu4?si=J6nHHD3atC3bsTmW">Techworld with Nana's Kubernetes crash course</a>. I really appreciate the clarity and the ability that Nana has to easily digest and present complex topics in an interesting way. Take a look at her crash course.</p>
<p>Now, the hands-on part begins with containerization, containerizing both the application and the database. Containerizing the application is fairly straightforward by following the instructions provided on the CRC website. However, where one needs to focus attention is on interfacing with the database. When dealing with the database, there's no need for containerization since the official Docker image of MariaDB is already available and ready to use. The minor adjustment required here is to load the data into the database using the initialization script via a Kubernetes ConfigMap. This, I believe, can be interactively done through a command resembling something like <code>kubectl create configmap db-init-script --from-file=db-load-script.sql</code>. However, I've developed a habit of configuring almost all Kubernetes resources declaratively. Therefore, I created a manifest for the ConfigMap, not just for this purpose but for all the resources I needed to deploy right from the early stage.</p>
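<p>For illustration, here is a minimal sketch of how that ConfigMap can be mounted into the MariaDB deployment so the script runs on the database's first startup (the official MariaDB image executes any SQL file found in <code>/docker-entrypoint-initdb.d</code>); the container name and image tag are placeholders.</p>
<pre><code class="lang-yaml"># Excerpt from the MariaDB Deployment pod spec (illustrative)
spec:
  containers:
    - name: mariadb
      image: mariadb:latest
      volumeMounts:
        - name: db-init
          mountPath: /docker-entrypoint-initdb.d   # init scripts here run on first start
  volumes:
    - name: db-init
      configMap:
        name: db-init-script
</code></pre>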
<p>Before exposing the application, it's essential to make the underlying adjustments, namely, initializing the passwords to connect to the database, the connection string to the database, and the service to allow the connection, etc. Therefore, to initialize the root password of the database, I created, of course, a Kubernetes secret containing the base64-encoded value of the password. Then, I referenced it in the environment variables of the MariaDB deployment manifest. It should look something like this in the MariaDB deployment manifest:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">env:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_ROOT_PASSWORD</span>
    <span class="hljs-attr">valueFrom:</span>
      <span class="hljs-attr">secretKeyRef:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">mariadb-root-password</span>
        <span class="hljs-attr">key:</span> <span class="hljs-string">password</span>
</code></pre>
<p>This initializes the root password upon launching the database. Then, to establish communication between the application and the database, it's necessary to create the MariaDB service and add it, along with the service and the secret, to the environment variables of the application so that this information is used to initiate the connection to the database when the application starts. It will look something like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">env:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_HOST</span>
    <span class="hljs-attr">value:</span> <span class="hljs-string">mariadb-service</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_USER</span>
    <span class="hljs-attr">value:</span> <span class="hljs-string">root</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_PASSWORD</span>
    <span class="hljs-attr">valueFrom:</span>
      <span class="hljs-attr">secretKeyRef:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">mariadb-root-password</span>
        <span class="hljs-attr">key:</span> <span class="hljs-string">password</span>
</code></pre>
<p>Next, to expose the application on the internet, I utilized a Load Balancer service type that I associated with the deployment. In my case, the Kubernetes environment I used is GKE on Google Cloud. Therefore, I needed to reserve a static public IP address, which I used for the Load Balancer. Then, I attached the service to the application using the app selector. Once all of this was applied, the application was ready to serve on the internet.</p>
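<p>As a rough sketch, the Service manifest could look like the following on GKE, assuming a reserved regional static address; the IP, names, and ports are placeholders to replace with your own.</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Service
metadata:
  name: ecom-web-service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # the reserved static IP (placeholder)
  selector:
    app: ecom-web                # must match the labels of the application pods
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</code></pre>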
<p>Next comes the second chunk of the project. The first part of this second chunk involved setting up the feature toggle using a ConfigMap. I'll explain how I proceeded. The goal of the feature toggle is to activate the dark mode on the site, so it was necessary to create another CSS file for the dark mode. Then, in the PHP code, I created a condition stating that if the environment variable FEATURE_DARK_MODE = true, then to use the dark mode CSS file, otherwise to use the default file. To make this environment variable value known to the application, I created the ConfigMap:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">feature-toggle-config</span>
<span class="hljs-attr">data:</span>
  <span class="hljs-attr">FEATURE_DARK_MODE:</span> <span class="hljs-string">"true"</span>
</code></pre>
<p>And referenced it in the application deployment manifest in the environment section:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">env:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">FEATURE_DARK_MODE</span>
    <span class="hljs-attr">valueFrom:</span>
      <span class="hljs-attr">configMapKeyRef:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">feature-toggle-config</span>
        <span class="hljs-attr">key:</span> <span class="hljs-string">FEATURE_DARK_MODE</span>
</code></pre>
<p>With this, once the ConfigMap was applied, the dark mode was activated on the site. Then, it was about manually scaling the application by increasing the number of replicas, and then performing a rolling update with a new version of the application, including a promotion banner on the site, and finally rolling back to the initial state, all as indicated in the instructions on the CRC website.</p>
<p>Now, it was about implementing autoscaling so that the pods scale up to a maximum of 10 if their CPU usage exceeds 50%. I created an autoscaling resource manifest including the autoscaling requirements. Of course, this can be done interactively as indicated in the CRC guide, but as I mentioned earlier, I prefer to set up my configs declaratively whenever possible to benefit from reusability. To simulate the load, I used the lightweight and easy-to-use load testing and benchmarking tool Siege.</p>
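<p>For reference, a HorizontalPodAutoscaler manifest matching those requirements could look roughly like this; the names and the minimum replica count are assumptions, and note that utilization-based scaling only works if the Deployment's containers declare CPU resource requests.</p>
<pre><code class="lang-yaml">apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecom-web          # placeholder; must match your application Deployment
  minReplicas: 2            # assumed floor
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU usage exceeds 50%
</code></pre>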
<p>Now let's turn to <strong>Liveness and Readiness Probes</strong>, which are Kubernetes mechanisms designed to check the health of a Pod.</p>
<ol>
<li><p><strong>Liveness Probe</strong>: This probe verifies whether your application is running properly. If the liveness probe fails, Kubernetes will terminate the Pod and create a new one as a replacement. This feature is particularly useful if your application has deadlocked and cannot recover without a restart.</p>
<p> In my configuration, the liveness probe is configured to perform an HTTP GET request to the <code>/live.php</code> endpoint on port 80 of my pod. I have created another PHP file (<code>live.php</code>) in the same location as <code>index.php</code>.</p>
</li>
</ol>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>
http_response_code(<span class="hljs-number">200</span>);
<span class="hljs-keyword">echo</span> <span class="hljs-string">"Application is running"</span>;
</code></pre>
<p>The probe will start 15 seconds after the Pod initializes (<code>initialDelaySeconds</code>) and repeat every 20 seconds (<code>periodSeconds</code>). It checks whether a 200 response is returned; if not, the Pod will be terminated and recreated.</p>
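<p>In the Deployment manifest, that translates into a probe block along these lines (a sketch based on the values just described):</p>
<pre><code class="lang-yaml">livenessProbe:
  httpGet:
    path: /live.php
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
</code></pre>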
<ol start="2">
<li><p><strong>Readiness Probe</strong>: This probe checks whether your application is ready to handle incoming traffic. If the readiness probe fails, Kubernetes will halt traffic to the Pod until it becomes ready.</p>
<p> In my configuration, the readiness probe is configured to perform an HTTP GET request to the <code>/status.php</code> endpoint on port 80 of my Pod. I have also created another PHP file (<code>status.php</code>).</p>
</li>
</ol>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>
$dbHost = getenv(<span class="hljs-string">'DB_HOST'</span>);
$dbUser = getenv(<span class="hljs-string">'DB_USER'</span>);
$dbPassword = getenv(<span class="hljs-string">'DB_PASSWORD'</span>);
$dbName = getenv(<span class="hljs-string">'DB_NAME'</span>);

$link = mysqli_connect($dbHost, $dbUser, $dbPassword, $dbName);

<span class="hljs-keyword">if</span> ($link) {
    $res = mysqli_query($link, <span class="hljs-string">"SELECT * FROM products LIMIT 1;"</span>);
    <span class="hljs-keyword">if</span> ($res) {
        http_response_code(<span class="hljs-number">200</span>);
        <span class="hljs-keyword">echo</span> <span class="hljs-string">"Application is healthy"</span>;
    } <span class="hljs-keyword">else</span> {
        http_response_code(<span class="hljs-number">500</span>);
        <span class="hljs-keyword">echo</span> <span class="hljs-string">"Application is not healthy: unable to query the database"</span>;
    }
} <span class="hljs-keyword">else</span> {
    http_response_code(<span class="hljs-number">500</span>);
    <span class="hljs-keyword">echo</span> <span class="hljs-string">"Application is not healthy: unable to connect to the database"</span>;
}
</code></pre>
<p>The probe will start 5 seconds after the Pod initializes (<code>initialDelaySeconds</code>) and repeat every 10 seconds (<code>periodSeconds</code>). This probe goes further by testing the application's connection to the database.  </p>
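<p>Again as a sketch, the corresponding block in the Deployment manifest would look something like this:</p>
<pre><code class="lang-yaml">readinessProbe:
  httpGet:
    path: /status.php
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
</code></pre>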
<p>Explore the GitHub repository to delve into manifests and code files: <a target="_blank" href="https://github.com/davWK/k8s-Resume-Challenge">GitHub Repo</a></p>
<h3 id="heading-to-be-continued">To be continued ...</h3>
<p>Currently, I'm delving into the extra credit part, which includes Package Everything in Helm, Implement Persistent Storage, and Implement CI/CD Pipeline. Stay tuned for the next updates!</p>
]]></content:encoded></item><item><title><![CDATA[From legacy to cloud serverless]]></title><description><![CDATA[Hello and welcome to this article in a journey of migrating a legacy-built app to the cloud. In this section, we will focus on three aspects: interfacing the application with Cloud Firestore, automating deployment, and exploring how Binary Authorizat...]]></description><link>https://blog.cloudvio.net/from-legacy-to-cloud-serverless-1-1-1</link><guid isPermaLink="true">https://blog.cloudvio.net/from-legacy-to-cloud-serverless-1-1-1</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707139227513/6820b32a-a78a-427c-9b50-35e6862ea577.png" alt class="image--center mx-auto" /></p>
<p>Hello and welcome to this article in a journey of migrating a legacy-built app to the cloud. In this section, we will focus on three aspects: interfacing the application with Cloud Firestore, automating deployment, and exploring how Binary Authorization can reinforce supply chain security while aligning with security policies.</p>
<p>If you're joining us midway, I encourage you to take a look at <a target="_blank" href="https://davidwoglo.hashnode.dev/series/legacy-to-serverless">the previous articles</a> to get up to speed. Otherwise, let's dive in! 😊</p>
<h1 id="heading-integrating-the-app-with-cloud-firestore">integrating the app with Cloud Firestore</h1>
<p>Our previous code interacted with MongoDB. With the migration to Google Cloud, we are transitioning away from MongoDB in favor of Firestore, which is Google Cloud's managed NoSQL document database built for automatic scaling, high performance, and ease of application development. To achieve this, we'll need to make modifications to our code, ensuring that our application seamlessly integrates and functions with Firestore.</p>
<p>We will replace the old MongoDB code with the following Firestore integration:</p>
<p>Old:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> pymongo <span class="hljs-keyword">import</span> MongoClient
<span class="hljs-keyword">from</span> bson.objectid <span class="hljs-keyword">import</span> ObjectId
<span class="hljs-keyword">import</span> mongomock
...
<span class="hljs-keyword">if</span> os.environ.get(<span class="hljs-string">'TESTING'</span>):
    client = mongomock.MongoClient()
<span class="hljs-keyword">else</span>:
    client = MongoClient(os.environ[<span class="hljs-string">'MONGO_URI'</span>])
db = client.flask_db
todos = db.todos
</code></pre>
<p>New:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> google.auth <span class="hljs-keyword">import</span> compute_engine
<span class="hljs-keyword">from</span> google.cloud <span class="hljs-keyword">import</span> firestore
...
credentials = compute_engine.Credentials()
db = firestore.Client(credentials=credentials)
todos = db.collection(<span class="hljs-string">'todos'</span>)
</code></pre>
<ol>
<li><p><code>from google.auth import compute_engine</code>: This line imports the <code>compute_engine</code> module from the <code>google.auth</code> library, which is used for authentication in Google Cloud environments</p>
</li>
<li><p><code>from</code><a target="_blank" href="http://google.cloud"><code>google.cloud</code></a><code>import firestore</code>: This line imports the <code>firestore</code> module from the <a target="_blank" href="http://google.cloud"><code>google.cloud</code></a> library, enabling interaction with Google Cloud Firestore.</p>
</li>
<li><p>The <code>compute_engine.Credentials()</code> call retrieves the default credentials provided by Google Cloud in its environment. These credentials are essential for authenticating with Firestore. In a local or non-Google Cloud service environment, you would need to generate a service account key before being able to authenticate with Firestore. However, in our case, since the code will be deployed on Cloud Run, authentication will be handled using the default service account of Cloud Run.</p>
</li>
<li><p><code>todos = db.collection('todos')</code>. Here, we're defining a Firestore collection. Collections are used to organize documents in Firestore.</p>
</li>
</ol>
<p><strong>Data Insertion</strong>: When a POST request is made, the new todo item is added to the Firestore collection 'todos' using the <code>add</code> method. The data is stored as a dictionary.</p>
<pre><code class="lang-python"><span class="hljs-meta">@app.route('/', methods=['GET', 'POST'])</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">index</span>():</span>
    <span class="hljs-keyword">if</span> request.method == <span class="hljs-string">'POST'</span>:
        content = request.form.get(<span class="hljs-string">'content'</span>)
        degree = request.form.get(<span class="hljs-string">'degree'</span>)
        todos.add({<span class="hljs-string">'content'</span>: content, <span class="hljs-string">'degree'</span>: degree})
</code></pre>
<p>Old:</p>
<pre><code class="lang-python">todos.insert_one({<span class="hljs-string">'content'</span>: content, <span class="hljs-string">'degree'</span>: degree})
</code></pre>
<p>New:</p>
<pre><code class="lang-python">todos.add({<span class="hljs-string">'content'</span>: content, <span class="hljs-string">'degree'</span>: degree})
</code></pre>
<p>This modification reflects the adjustment needed in the code for Firestore, moving from the <code>insert_one</code> method in MongoDB to the <code>add</code> method in Firestore for adding documents.</p>
<p><strong>Data Retrieval</strong>: In the new code, we utilize <a target="_blank" href="http://todos.stream"><code>todos.stream</code></a><code>()</code> to obtain a stream of documents from the Firestore collection. In the old code, we used <code>todos.find()</code> to get a cursor to the documents in the MongoDB collection.</p>
<p>Old:</p>
<pre><code class="lang-python">all_todos = todos.find()
</code></pre>
<p>New:</p>
<pre><code class="lang-python">all_todos = [{<span class="hljs-string">'_id'</span>: doc.id, **doc.to_dict()} <span class="hljs-keyword">for</span> doc <span class="hljs-keyword">in</span> todos.stream()]
</code></pre>
<p>We now use <a target="_blank" href="http://todos.stream"><code>todos.stream</code></a><code>()</code> to iterate over documents and convert them to a dictionary format for retrieval. The '_id' field represents the document ID in Firestore.</p>
<p><strong>Data Deletion</strong>: In the new code, we employ <code>todos.document(id).delete()</code> to remove a document from the Firestore collection. In the old code, we used <code>todos.delete_one({"_id": ObjectId(id)})</code> to delete a document from the MongoDB collection.</p>
<p>Old:</p>
<pre><code class="lang-python">todos.delete_one({<span class="hljs-string">"_id"</span>: ObjectId(id)})
</code></pre>
<p>New:</p>
<pre><code class="lang-python">todos.document(id).delete()
</code></pre>
<p>The <code>todos.document(id).delete()</code> method is used to delete a specific document by its ID in Firestore.</p>
<p>After all these updates, the new <a target="_blank" href="http://app.py"><code>app.py</code></a> should look like this:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask, render_template, request, url_for, redirect
<span class="hljs-keyword">from</span> google.auth <span class="hljs-keyword">import</span> compute_engine
<span class="hljs-keyword">from</span> google.cloud <span class="hljs-keyword">import</span> firestore

app = Flask(__name__, template_folder=<span class="hljs-string">'templates'</span>)

<span class="hljs-comment"># Use the default credentials provided by the Cloud Run environment</span>
credentials = compute_engine.Credentials()

<span class="hljs-comment"># Use these credentials to authenticate with Firestore</span>
db = firestore.Client(credentials=credentials)

todos = db.collection(<span class="hljs-string">'todos'</span>)

<span class="hljs-meta">@app.route('/', methods=['GET', 'POST'])</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">index</span>():</span>
    <span class="hljs-keyword">if</span> request.method == <span class="hljs-string">'POST'</span>:
        content = request.form.get(<span class="hljs-string">'content'</span>)
        degree = request.form.get(<span class="hljs-string">'degree'</span>)
        todos.add({<span class="hljs-string">'content'</span>: content, <span class="hljs-string">'degree'</span>: degree})
        <span class="hljs-keyword">return</span> redirect(url_for(<span class="hljs-string">'index'</span>))

    all_todos = [{<span class="hljs-string">'_id'</span>: doc.id, **doc.to_dict()} <span class="hljs-keyword">for</span> doc <span class="hljs-keyword">in</span> todos.stream()]
    <span class="hljs-keyword">return</span> render_template(<span class="hljs-string">'index.html'</span>, todos=all_todos)

<span class="hljs-meta">@app.route('/&lt;id&gt;/delete/', methods=['POST'])</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">delete</span>(<span class="hljs-params">id</span>):</span>
    todos.document(id).delete()
    <span class="hljs-keyword">return</span> redirect(url_for(<span class="hljs-string">'index'</span>))
</code></pre>
<p>The adjustments include using Firestore methods for data insertion (<code>todos.add()</code>), retrieval (<a target="_blank" href="http://todos.stream"><code>todos.stream</code></a><code>()</code>), and deletion (<code>todos.document(id).delete()</code>), along with integrating the appropriate syntax for Firestore operations.</p>
<h2 id="heading-testing-the-new-code">Testing the new code</h2>
<p>To ensure the correctness of the new '<a target="_blank" href="http://app.py">app.py</a>' code, we also have to update the testing approach. The tests aim to verify the functionality of critical components, such as data insertion and deletion, within the context of the Firestore integration.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> unittest
<span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> patch, MagicMock
<span class="hljs-keyword">from</span> app <span class="hljs-keyword">import</span> app

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestApp</span>(<span class="hljs-params">unittest.TestCase</span>):</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">setUp</span>(<span class="hljs-params">self</span>):</span>
        self.app = app.test_client()

<span class="hljs-meta">    @patch('app.todos.add')</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_index_post</span>(<span class="hljs-params">self, mock_add</span>):</span>
        response = self.app.post(<span class="hljs-string">'/'</span>, data={<span class="hljs-string">'content'</span>: <span class="hljs-string">'Test Todo'</span>, <span class="hljs-string">'degree'</span>: <span class="hljs-string">'Test Degree'</span>})

        mock_add.assert_called_once_with({<span class="hljs-string">'content'</span>: <span class="hljs-string">'Test Todo'</span>, <span class="hljs-string">'degree'</span>: <span class="hljs-string">'Test Degree'</span>})
        self.assertEqual(response.status_code, <span class="hljs-number">302</span>)

<span class="hljs-meta">    @patch('app.todos.document')</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_delete</span>(<span class="hljs-params">self, mock_document</span>):</span>
        mock_delete = MagicMock()
        mock_document.return_value.delete = mock_delete

        response = self.app.post(<span class="hljs-string">'/123/delete/'</span>)

        mock_delete.assert_called_once()
        self.assertEqual(response.status_code, <span class="hljs-number">302</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    unittest.main()
</code></pre>
<p>Let's delve into the primary components of this testing suite:</p>
<ol>
<li><p><strong>Data Insertion Test (</strong><code>test_index_post</code>):</p>
<ul>
<li><p>This test simulates a POST request to the root endpoint ('/') of the application when a new todo item is added.</p>
</li>
<li><p>The <code>@patch</code> decorator is utilized to mock the <code>todos.add</code> method, ensuring that actual Firestore interactions are bypassed during testing.</p>
</li>
<li><p>The test asserts that the 'add' method is called with the expected data, and the response status code is as expected (302 for a successful redirect).</p>
</li>
</ul>
</li>
<li><p><strong>Data Deletion Test (</strong><code>test_delete</code>):</p>
<ul>
<li><p>This test mimics a POST request to the endpoint for deleting a specific todo item ('/&lt;id&gt;/delete/').</p>
</li>
<li><p>The <code>@patch</code> decorator is applied to mock the <code>todos.document</code> method, and a MagicMock is used to mock the 'delete' method of the Firestore document.</p>
</li>
<li><p>The test verifies that the 'delete' method is called once and asserts the response status code after the deletion operation (302 for a successful redirect).</p>
</li>
</ul>
</li>
</ol>
<p>These tests ensure that data insertion and deletion operations interact seamlessly with Firestore. The use of mocking allows for isolated testing, focusing on specific components without the need for actual Firestore connections during the testing phase.</p>
<p>Now that the test has been added, you can re-run your Cloud Build pipeline to address any potential minor issues before proceeding with the deployment on Cloud Run.</p>
<h1 id="heading-automating-deployment-cd">Automating deployment (CD)</h1>
<p>To automate the deployment on Cloud Run after building and pushing the image, add the following step to your Cloud Build configuration (<code>cloudbuild.yaml</code>):</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Step 8: Deploy the image to Cloud Run</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'deploy-image'</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">'gcr.io/google.com/cloudsdktool/cloud-sdk'</span>
  <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">'gcloud'</span>
  <span class="hljs-attr">args:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'run'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'deploy'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'$_SERVICE_NAME'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'--image'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'--region'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'$_REGION'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'--platform'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'managed'</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">'--allow-unauthenticated'</span>
  <span class="hljs-attr">waitFor:</span> [<span class="hljs-string">'push-image'</span>]
</code></pre>
<p>This segment of the Cloud Build configuration file handles the deployment of the Docker image to Google Cloud Run. Here's a breakdown of each line's purpose:</p>
<ul>
<li><p><code>id: 'deploy-image'</code>: Provides a unique identifier for this step within the Cloud Build configuration file.</p>
</li>
<li><p><code>name: '</code><a target="_blank" href="http://gcr.io/google.com/cloudsdktool/cloud-sdk"><code>gcr.io/google.com/cloudsdktool/cloud-sdk</code></a><code>'</code>: Specifies the Docker image to be used for this step, which, in this case, is the Google Cloud SDK image.</p>
</li>
<li><p><code>entrypoint: 'gcloud'</code>: Sets the Docker entrypoint to 'gcloud,' the command-line interface for Google Cloud Platform.</p>
</li>
<li><p><code>args</code>: A list of arguments passed to the 'gcloud' command.</p>
<ul>
<li><p><code>'run' 'deploy' '$_SERVICE_NAME'</code>: Deploys a new revision of the Cloud Run service identified by <code>$_SERVICE_NAME</code>.</p>
</li>
<li><p><code>'--image' '$_</code><a target="_blank" href="http://REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA"><code>REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA</code></a><code>'</code>: Specifies the Docker image to deploy, located in the designated Google Cloud Artifact Registry repository.</p>
</li>
<li><p><code>'--region' '$_REGION'</code>: Specifies the region where the Cloud Run service is deployed.</p>
</li>
<li><p><code>'--platform' 'managed'</code>: Indicates that the Cloud Run service uses the fully managed version of Cloud Run.</p>
</li>
<li><p><code>'--allow-unauthenticated'</code>: Permits unauthenticated requests to the Cloud Run service.</p>
</li>
</ul>
</li>
<li><p><code>waitFor: ['push-image']</code>: Directs Cloud Build to wait for the completion of the 'push-image' step before initiating this step.</p>
</li>
<li><p>Afterwards, don't forget to update the <code>substitutions</code> section of your Cloud Build configuration so it defines the variables used in this new step (see the sketch just after this list).</p>
</li>
</ul>
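<p>For reference, the <code>substitutions</code> block of the Cloud Build file might look roughly like this; the values below are placeholders for your own project, not the ones I actually used.</p>
<pre><code class="lang-yaml">substitutions:
  _SERVICE_NAME: 'legacy-to-cloud'   # placeholder values; adapt them to your project
  _REGION: 'europe-west1'
  _REPOSITORY: 'my-docker-repo'
  _IMAGE: 'legacy-to-cloud'
</code></pre>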
<p>Now, you can push your code to trigger the pipeline. If the pipeline runs successfully, you will obtain the access link for your application. Navigate to the Google Cloud Console, go to Cloud Run, and click on the name of your newly deployed service to retrieve the access URL for your application.</p>
<h1 id="heading-binary-authorization">Binary Authorization</h1>
<p>Binary Authorization is a security control applied at deployment time for container images on Google Cloud platforms like Cloud Run, GKE, Anthos Service Mesh, and Anthos Clusters. Its primary function is to allow only images that have been attested as secure or trusted to be deployed. This attestation involves subjecting the image to various processes such as testing, vulnerability scanning, and even manual signatures. Only after the image meets predefined conditions is it considered validated and allowed for deployment on these platforms.</p>
<p>Binary Authorization is responsible for defining and enforcing this policy. The controls or checks are carried out by attestors, which can be custom-created, predefined, or generated by tools like Cloud Build (currently in preview).</p>
<p>For this project, I explored configuring Binary Authorization with the built-by-cloud-build attestor so that only images built by Cloud Build can be deployed. With a well-crafted and robust Cloud Build configuration (incorporating various tests, vulnerability analyses, etc.), this approach can save significant time compared to creating and using a custom attestor. However, as of the time of writing, Binary Authorization with the Cloud Build attestor is still in preview.</p>
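<p>To make this more concrete, here is roughly what a Binary Authorization policy requiring that attestor could look like when exported with <code>gcloud container binauthz policy export</code>; treat it as a sketch, with <code>PROJECT_ID</code> as a placeholder.</p>
<pre><code class="lang-yaml">defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/PROJECT_ID/attestors/built-by-cloud-build   # the attestor generated by Cloud Build
globalPolicyEvaluationMode: ENABLE
</code></pre>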
<p>The main challenge with using the built-by-cloud-build attestor is that it is generated only once during the build with Cloud Build. This may not align well with continuous delivery (CD), especially for recurrent executions of the CI/CD pipeline. Ideally, with each new run of the pipeline, a new attestor should be generated to update the Binary Authorization policy. This becomes problematic, especially if the Binary Authorization policy is configured at the organization level, impacting all other deployments. From a personal perspective, to address this, it would be beneficial if Cloud Build generates the attestor once and uses it for subsequent pipeline executions. Currently, the custom attestor provides a workaround for this limitation. However, for simplicity, it would be ideal if Cloud Build handles this process seamlessly.</p>
<p>Follow <a target="_blank" href="https://cloud.google.com/binary-authorization/docs/run/overview">this link</a> for the setup of Binary Authorization."</p>
<p>This concludes the article. Thank you for reading. You can find the configurations and code for this project in the following <a target="_blank" href="https://github.com/davWK/legacy-to-cloud-serverless">Git repository</a>.</p>
]]></content:encoded></item><item><title><![CDATA[From legacy to cloud serverless]]></title><description><![CDATA[Hey, how's it been since the last article? If you haven't had a chance to check out the previous installment in the series, I invite you to discover it here. Perhaps you've already tackled something similar to what was described in the previous artic...]]></description><link>https://blog.cloudvio.net/from-legacy-to-cloud-serverless-1</link><guid isPermaLink="true">https://blog.cloudvio.net/from-legacy-to-cloud-serverless-1</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p>Hey, how's it been since the last article? If you haven't had a chance to check out the previous installment in the series, I invite you to discover it <a target="_blank" href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless">here</a>. Perhaps you've already tackled something similar to what was described in the previous article, and this one seems to be a good resource to continue your project. Welcome aboard!</p>
<p>In this article, we'll be transforming Docker Compose services into Kubernetes objects and deploying them in a Kubernetes environment.</p>
<p>To follow along, you'll need some knowledge of Kubernetes and to have completed the lab described in <a target="_blank" href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless">the previous article</a> (or something similar); also make sure you have a Kubernetes environment ready. As of now, I'm using DigitalOcean's managed Kubernetes. I mention 'as of now' because if you've been here from the beginning, you're probably aware that our project's ultimate goal isn't just deploying on K8s. It's a journey of migrating a traditional app to a serverless cloud setup. The next step in this series will involve migrating to Google Cloud. Oh, did I forget to mention? I'm all about Google Cloud; I recently even snagged my Professional Cloud Architect certification. So, expect Google Cloud to pop up regularly in my discussions, and the rest of this series will be purely GCP-focused.</p>
<p>Enough chatter, let's dive into the real stuff!</p>
<h1 id="heading-build-the-application-image-and-push-it-to-the-docker-registry">Build the application image and push it to the Docker registry</h1>
<p>If you haven't done so already, I invite you to clone our project's repo <a target="_blank" href="https://github.com/davWK/legacy-to-cloud-serverless.git">here</a>. Navigate to the <code>docker</code> folder, where all the Docker-related elements of the project are stored. Explore the content a bit, and once you're ready, come back, and let's continue. If you don't have a Docker Hub account yet, I recommend creating one.</p>
<p>Now, in your terminal, log in with <code>docker login</code> using your Docker Hub account information. After that, build the image, tagging it with your username and the image name.</p>
<pre><code class="lang-bash">docker build -t &lt;username&gt;/&lt;image-name&gt; .
</code></pre>
<p>Finally, push the image to Docker Hub.</p>
<pre><code class="lang-bash">docker push &lt;username&gt;/&lt;image-name&gt;
</code></pre>
<h1 id="heading-export-mongodb-data">Export MongoDB data</h1>
<p>As part of our migration process, it's crucial to ensure we retain our data. To achieve this, let's export the data stored in the MongoDB container that we'll later use when deploying MongoDB on Kubernetes.</p>
<p>Export the existing MongoDB database from the Docker Compose setup:</p>
<ol>
<li><p>Access the MongoDB database container shell.</p>
<pre><code class="lang-bash"> docker <span class="hljs-built_in">exec</span> -it &lt;mongo_db_service&gt; bash
</code></pre>
</li>
<li><p>Export all data from the MongoDB database.</p>
<pre><code class="lang-bash"> mongodump &lt;file name&gt;
</code></pre>
</li>
<li><p>Exit the MongoDB database container shell.</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">exit</span>
</code></pre>
</li>
<li><p>Copy the 'dump' folder from the MongoDB container to a specified destination.</p>
<pre><code class="lang-bash"> docker cp &lt;mongo_db_service&gt;:/dump &lt;destination&gt;
</code></pre>
</li>
</ol>
<h1 id="heading-install-mongodb-on-kubernetes">Install MongoDB on Kubernetes</h1>
<p>Now, while connected to the Kubernetes cluster, let's install MongoDB using Helm:</p>
<pre><code class="lang-bash">helm install mongo-helm oci://registry-1.docker.io/bitnamicharts/mongodb --<span class="hljs-built_in">set</span> auth.rootUser=root,auth.rootPassword=<span class="hljs-string">"defineYourRootPassword"</span>
</code></pre>
<p>This command leverages Helm, a Kubernetes package manager, to install MongoDB from a chart hosted on Docker's registry.</p>
<ul>
<li>The part <code>--set auth.rootUser=root,auth.rootPassword="defineYourRootPassword"</code> specifies the username and password for the MongoDB root user.</li>
</ul>
<p>Make sure to save the output of this command; we'll be using it to construct the database connection URI.</p>
<p>Verify that everything is installed correctly with the following commands:</p>
<pre><code class="lang-bash">kubectl get pods
kubectl get services
</code></pre>
<h1 id="heading-restore-data">Restore data</h1>
<p>It's time to restore the database:</p>
<ol>
<li><p>Navigate to MongoDB Kubernetes pod</p>
<pre><code class="lang-bash"> kubectl <span class="hljs-built_in">exec</span> -it --namespace default mongodb_pod -- /bin/bash
 mongosh
 use admin
 db.auth(<span class="hljs-string">'root'</span>, <span class="hljs-string">'password'</span>)
</code></pre>
</li>
<li><p>Create a non-root MongoDB user:</p>
</li>
</ol>
<pre><code class="lang-bash">db.createUser({
  user: <span class="hljs-string">'username'</span>,
  <span class="hljs-built_in">pwd</span>: <span class="hljs-string">'password'</span>,
  roles: [
    { role: <span class="hljs-string">'readWriteAnyDatabase'</span>, db: <span class="hljs-string">'admin'</span> },
    { role: <span class="hljs-string">'dbAdminAnyDatabase'</span>, db: <span class="hljs-string">'admin'</span> },
    { role: <span class="hljs-string">'clusterAdmin'</span>, db: <span class="hljs-string">'admin'</span> }
  ]
})
<span class="hljs-built_in">exit</span>
</code></pre>
<ol>
<li>Restore the Docker Compose database dump to the new MongoDB pod:</li>
</ol>
<p>Copy the database dump folder previously copied into the MongoDB pod:</p>
<pre><code class="lang-bash">kubectl cp &lt;mongodb_dump_location_filename&gt; &lt;mongodb_pod&gt;:/tmp/
</code></pre>
<p>Navigate to the MongoDB pod shell:</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">exec</span> -it --namespace default mongodb_pod -- /bin/bash
</code></pre>
<p>Change the directory to the dump directory and list all MongoDB folders to verify the contents:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /tmp/dump
ls
</code></pre>
<p>Restore the app database:</p>
<pre><code class="lang-bash">mongorestore --uri=<span class="hljs-string">"mongodb://username:password@localhost:27017/?authSource=admin"</span> app_db -d app_db
</code></pre>
<p>Here's how the connection URI is formed:</p>
<ul>
<li><p><code>mongodb://</code>: This is the prefix to identify that we're connecting to a MongoDB instance.</p>
</li>
<li><p><code>&lt;username&gt;:&lt;password&gt;@</code>: This part specifies the username and password to connect to the MongoDB instance. You would replace <code>&lt;username&gt;</code> and <code>&lt;password&gt;</code> with the actual username and password. In your case, the username is the one created earlier.</p>
</li>
<li><p><code>mongo-helm-mongodb.default.svc.cluster.local:27017</code>: This is the host and port where the MongoDB server is reachable from inside the cluster. <code>mongo-helm-mongodb.default.svc.cluster.local</code> is the DNS name for the MongoDB service in your Kubernetes cluster, and <code>27017</code> is the default port for MongoDB. Note that in the <code>mongorestore</code> command above we used <code>localhost:27017</code> instead, since we were running it from inside the MongoDB pod itself; the service DNS name is what the application will use to connect.</p>
</li>
</ul>
<p>Exit the MongoDB pod shell:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">exit</span>
</code></pre>
<p>The new MongoDB is now ready for use.</p>
<h1 id="heading-deploy-and-connect-the-application-to-the-database">Deploy and connect the application to the database</h1>
<p>First, let's create the Kubernetes secret that will contain the connection string for the database. We're using the secret object because our connection string contains sensitive information. Kubernetes provides the secret object precisely for scenarios like this. If it were just configuration information or environment variables, a ConfigMap object would be more suitable.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Secret</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">app-secret</span>
<span class="hljs-attr">type:</span> <span class="hljs-string">Opaque</span>
<span class="hljs-attr">data:</span>
  <span class="hljs-attr">mongo-uri:</span> <span class="hljs-string">&lt;base64-encoded-mongo-uri&gt;</span>
</code></pre>
<p>Create a YAML file and paste this content into it. Name the file as you see fit. Note that the <code>mongo-uri</code> field under <code>data</code> should contain the base64-encoded MongoDB URI. Replace the placeholder with the actual base64-encoded connection string.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">legacy-to-cloud-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">legacy-to-cloud</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">legacy-to-cloud</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">legacy-to-cloud</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">legacy-to-cloud</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">docker_username/image_name:tag</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">5000</span>
        <span class="hljs-attr">env:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MONGO_URI</span>
          <span class="hljs-attr">valueFrom:</span>
            <span class="hljs-attr">secretKeyRef:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">mongodb-uri-secret</span>
              <span class="hljs-attr">key:</span> <span class="hljs-string">mongodb-uri</span>

<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">legacy-to-cloud-service</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">legacy-to-cloud</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">5000</span>
</code></pre>
<p>Create a second YAML file for your application's manifest and paste this content into it. The <code>env</code> section of the container in the Deployment references the MongoDB URI from the secret we created earlier. Ensure that the secret name and key match the values used in the secret manifest. Also, ensure that the selector in the Service matches the one in the Deployment. This is crucial for linking the pods to the service.</p>
<p>If everything looks good, let's proceed with deploying our application. You can use the following command to validate the syntax of your YAML file and perform a dry run:</p>
<pre><code class="lang-bash">kubectl apply -f filename.yaml --dry-run=client --validate=<span class="hljs-literal">true</span>
</code></pre>
<p>This command checks the syntax of your YAML file and prints out the resources that would be created or modified without actually applying the changes. If there are any syntax errors, this command will highlight them.</p>
<p>If everything is okay, create the resources with the following command:</p>
<pre><code class="lang-bash">kubectl apply -f filename1.yaml -f filename2.yaml
</code></pre>
<p>Replace <code>filename1.yaml</code> and <code>filename2.yaml</code> with the actual names of your YAML files.</p>
<p>Get the access IP address with the command:</p>
<pre><code class="lang-bash">kubectl get svc
</code></pre>
<p>Identify the service for your application and copy its external IP. Paste it into your browser to access the application.</p>
<p>Well, that wraps up this section on the migration to Kubernetes.</p>
<h1 id="heading-a-little-gift-for-the-road">A little gift for the road?</h1>
<p>Haha, did you know there's a tool to speed things up? Because here, we've created YAML manifests to deploy K8s resources. This deployment is a simple one, but imagine if it were a massive deployment with hundreds of Docker Compose services, unimaginable complexities, etc. Would we sit down and manually create manifests for all that complexity? Of course not :) Enter Kompose. Kompose is a conversion tool for Docker Compose to container orchestrators like Kubernetes. It takes a Docker Compose file and translates it into Kubernetes resources.</p>
<p>Kompose is a handy tool for those familiar with Docker Compose but aiming to deploy their application on Kubernetes. It automates the creation of Kubernetes deployments, services, and other resources based on the services defined in the Docker Compose file.</p>
<p>However, it's worth noting that not all Docker Compose features and options are supported by Kompose, so some manual tweaking of the generated Kubernetes resources might be necessary. Here's <a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-migrate-a-docker-compose-workflow-to-kubernetes#step-3-translating-compose-services-to-kubernetes-objects-with-kompose">an excellent guide</a> that addresses our use case well.</p>
<h1 id="heading-what-next">What next?</h1>
<p>And that's a wrap for this article! In the next one, we're heading to the GOOGLE CLOUUUUUUUD :) and beginning to introduce DevOps tools and practices to automate and speed up our work. We're talking about stepping up the game. We'll be using Google Cloud DevOps tools—Cloud Build for CI/CD, Artifact Registry for container images, GKE for deployments. Plus, we'll dive into DevSecOps tools and practices, leveraging the security available within the Google Cloud ecosystem.</p>
<p>Thanks for reading, and see you soon in the next article in the series!</p>
]]></content:encoded></item><item><title><![CDATA[From legacy to cloud serverless]]></title><description><![CDATA[Welcome to the first article in a series that will walk you through the process of migrating a legacy app from on-premises to the cloud, with a focus on modernization, serverless platforms, and integrated DevOps practices.
In this article, we will fo...]]></description><link>https://blog.cloudvio.net/from-legacy-to-cloud-serverless</link><guid isPermaLink="true">https://blog.cloudvio.net/from-legacy-to-cloud-serverless</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p>Welcome to the first article in a series that will walk you through the process of migrating a legacy app from on-premises to the cloud, with a focus on modernization, serverless platforms, and integrated DevOps practices.</p>
<p>In this article, we will focus on containerizing your app. However, if you're building an app from scratch, that's perfectly fine (in fact, it's even better). For this example, I'm using <a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-use-mongodb-in-a-flask-application">this DigitalOcean guide</a> to build a simple TODO app using Python (Flask) and MongoDB as the database. I've made some customizations to make it look better, but the main point is to build something that uses a NoSQL document-based database, as this will be required for the upcoming work.</p>
<p>You can clone the repository of the app <a target="_blank" href="https://github.com/davWK/legacy-to-cloud-serverless">here</a> on GitHub if you haven't built your own.</p>
<p>Once you have your app built, let's get started!</p>
<h1 id="heading-dockerfile">Dockerfile</h1>
<p>Here is the structure of the application directory that we will containerize, followed by the Dockerfile.</p>
<pre><code class="lang-bash">.
├── app.py
├── LICENSE
├── README.md
├── requirements.txt
├── static
│   └── style.css
└── templates
    └── index.html
</code></pre>
<p>The <a target="_blank" href="http://app.py">app.py</a> file is the main application file that contains the Flask app code. The requirements.txt file contains the list of Python dependencies required by the application. The static/ directory contains static files such as CSS, JavaScript, and images. The templates/ directory contains the HTML templates used by the Flask app.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Use a minimal base image</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">python:3.9.7-slim-buster</span> <span class="hljs-string">AS</span> <span class="hljs-string">base</span>

<span class="hljs-comment"># Create a non-root user</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">useradd</span> <span class="hljs-string">-m</span> <span class="hljs-string">-s</span> <span class="hljs-string">/bin/bash</span> <span class="hljs-string">flaskuser</span>
<span class="hljs-string">USER</span> <span class="hljs-string">flaskuser</span>

<span class="hljs-comment"># Set the working directory</span>
<span class="hljs-string">WORKDIR</span> <span class="hljs-string">/app</span>

<span class="hljs-comment"># Copy the requirements file and install dependencies</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">requirements.txt</span> <span class="hljs-string">.</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">pip</span> <span class="hljs-string">install</span> <span class="hljs-string">--no-cache-dir</span> <span class="hljs-string">-r</span> <span class="hljs-string">requirements.txt</span>

<span class="hljs-comment"># Add the directory containing the flask command to the PATH</span>
<span class="hljs-string">ENV</span> <span class="hljs-string">PATH="/home/flaskuser/.local/bin:${PATH}"</span>

<span class="hljs-comment"># Use a multi-stage build to minimize the size of the image</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">base</span> <span class="hljs-string">AS</span> <span class="hljs-string">final</span>

<span class="hljs-comment"># Copy the app code</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">app.py</span> <span class="hljs-string">.</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">templates</span> <span class="hljs-string">templates/</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">static</span> <span class="hljs-string">static/</span>

<span class="hljs-comment"># Set environment variables</span>
<span class="hljs-string">ENV</span> <span class="hljs-string">FLASK_APP=app.py</span>
<span class="hljs-string">ENV</span> <span class="hljs-string">FLASK_ENV=production</span>

<span class="hljs-comment"># Expose the port</span>
<span class="hljs-string">EXPOSE</span> <span class="hljs-number">5000</span>

<span class="hljs-comment"># Run the app</span>
<span class="hljs-string">CMD</span> [<span class="hljs-string">"flask"</span>, <span class="hljs-string">"run"</span>, <span class="hljs-string">"--host=0.0.0.0"</span>]
</code></pre>
<p>Here's a walkthrough and breakdown of the Dockerfile:</p>
<ol>
<li><p>The Dockerfile starts with a <code>FROM</code> instruction that specifies the base image to use. In this case, it's <code>python:3.9.7-slim-buster</code>, which is a minimal base image that includes Python 3.9.7 and some essential libraries.</p>
</li>
<li><p>The next instruction creates a non-root user named <code>flaskuser</code> using the <code>RUN</code> and <code>useradd</code> commands. This is a security best practice to avoid running the container as the root user.</p>
</li>
<li><p>The <code>WORKDIR</code> instruction sets the working directory to <code>/app</code>, which is where the application code will be copied.</p>
</li>
<li><p>The <code>COPY</code> instruction copies the <code>requirements.txt</code> file to the container's <code>/app</code> directory.</p>
</li>
<li><p>The <code>RUN</code> instruction installs the dependencies listed in <code>requirements.txt</code> using <code>pip</code>. The <code>--no-cache-dir</code> option is used to avoid caching the downloaded packages, which helps to keep the image size small.</p>
</li>
<li><p>The <code>ENV</code> instruction adds the directory containing the <code>flask</code> command to the <code>PATH</code> environment variable. This is necessary to run the <code>flask</code> command later.</p>
</li>
<li><p>The <code>FROM</code> instruction starts a new build stage using the <code>base</code> image defined earlier. This is a multi-stage build that helps to minimize the size of the final image.</p>
</li>
<li><p>The <code>COPY</code> instruction copies the application code (<code>app.py</code>), templates (<code>templates/</code>), and static files (<code>static/</code>) to the container's <code>/app</code> directory.</p>
</li>
<li><p>The <code>ENV</code> instruction sets the <code>FLASK_APP</code> and <code>FLASK_ENV</code> environment variables. <code>FLASK_APP</code> specifies the name of the main application file, and <code>FLASK_ENV</code> sets the environment to <code>production</code>.</p>
</li>
<li><p>The <code>EXPOSE</code> instruction exposes port <code>5000</code>, which is the default port used by Flask.</p>
</li>
<li><p>The <code>CMD</code> instruction specifies the command to run when the container starts. In this case, it runs the <code>flask run</code> command with the <code>--host=0.0.0.0</code> option to bind to all network interfaces.</p>
</li>
</ol>
<p>With this Dockerfile, the application can be containerized and executed. However, it's important to note that our app requires a database to store the data created or generated while it runs. Of course, you could pull a MongoDB image separately, run it independently, and then adjust both sides to establish communication between the two containers so that the app can store its data in the database. While this approach works, it is time-consuming and a bit tedious. To streamline the process, we will instead use Docker Compose: everything is declared in a YAML file, and with the <code>docker-compose up</code> command we can start and operate the different services seamlessly, saving time and effort.</p>
<h1 id="heading-streamlining-database-integration-with-docker-compose">Streamlining Database Integration with Docker Compose</h1>
<p>Here is the basic Docker Compose YAML file that we will use to streamline the process.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3.9'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">db:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo:4.4.14</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"27017:27017"</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongo-data:/data/db</span>

  <span class="hljs-attr">web:</span>
    <span class="hljs-attr">build:</span> <span class="hljs-string">.</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">"myflaskapp"</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"5000:5000"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">MONGO_URI=mongodb://db:27017</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">db</span>

<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">mongo-data:</span>
</code></pre>
<p>This Docker Compose YAML file is configured to set up two services: a MongoDB database (<code>db</code>) and a web application (<code>web</code>). Here's a breakdown:</p>
<ul>
<li><p><strong>Version:</strong> Specifies the version of the Docker Compose file format being used (<code>3.9</code> in this case).</p>
</li>
<li><p><strong>Services:</strong></p>
<ul>
<li><p><strong>Database (</strong><code>db</code>):</p>
<ul>
<li><p>Uses the MongoDB version <code>4.4.14</code> image.</p>
</li>
<li><p>Maps the host port <code>27017</code> to the container port <code>27017</code>.</p>
</li>
<li><p>Utilizes a volume named <code>mongo-data</code> to persistently store MongoDB data.</p>
</li>
</ul>
</li>
<li><p><strong>Web Application (</strong><code>web</code>):</p>
<ul>
<li><p>Builds the Docker image from the current directory (<code>.</code>).</p>
</li>
<li><p>Sets the container name as "myflaskapp."</p>
</li>
<li><p>Maps the host port <code>5000</code> to the container port <code>5000</code>.</p>
</li>
<li><p>Defines an environment variable <code>MONGO_URI</code> with the value <code>mongodb://db:27017</code>, establishing a connection to the MongoDB service.</p>
</li>
<li><p>Specifies a dependency on the <code>db</code> service, ensuring that the database is started before the web service.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Volumes:</strong></p>
<ul>
<li>Defines a volume named <code>mongo-data</code> for persisting MongoDB data.</li>
</ul>
</li>
</ul>
<p>In summary, this Docker Compose file orchestrates the deployment of a MongoDB database and a Flask web application, ensuring they can communicate and function together seamlessly.</p>
<p>Now navigate to the directory with the Docker Compose file and run <code>docker-compose up</code> to start MongoDB and a Flask web app. Access the app at <code>http://localhost:5000</code> to ensure everything works as expected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696780578097/bf79e8c0-f0c2-4f0e-b69e-faccc92f1aa8.png" alt class="image--center mx-auto" /></p>
<p>To stop, use <code>docker-compose down</code>.</p>
<p>All good? Next up: migrating the workflow to Kubernetes in the next article.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696780189965/eaa848eb-c48c-4e04-b5dc-b8f80d68bfcc.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Deploying and Securing an App on AWS EKS with Gitlab CI/CD and Checkov]]></title><description><![CDATA[Introduction
Deploying an application on AWS EKS (Elastic Kubernetes Service) can be a powerful way to ensure scalability and reliability for your application. However, the process can be complex and time-consuming, especially when it comes to ensuri...]]></description><link>https://blog.cloudvio.net/deploying-and-securing-an-app-on-aws-eks-with-gitlab-cicd-and-checkov</link><guid isPermaLink="true">https://blog.cloudvio.net/deploying-and-securing-an-app-on-aws-eks-with-gitlab-cicd-and-checkov</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Deploying an application on AWS EKS (Elastic Kubernetes Service) can be a powerful way to ensure scalability and reliability for your application. However, the process can be complex and time-consuming, especially when it comes to ensuring the security and compliance of your deployment. In this article, we'll show you how to simplify the process and ensure your deployment is secure with GitLab CI/CD and Checkov. GitLab CI/CD provides a powerful toolset for automating the deployment process and improving collaboration among team members, while Checkov is a Security as Code tool that can help you automatically scan your configuration files for potential security and compliance issues. By integrating these tools into your deployment pipeline, you can ensure your deployment is secure and compliant with industry best practices, all while saving time and effort.</p>
<h1 id="heading-prerequisites">Prerequisites</h1>
<p>Before proceeding, you will need the following:</p>
<ul>
<li><p>A GitLab project set up with runners to execute CI/CD jobs</p>
</li>
<li><p>A container registry (a Docker Hub repo is more than enough)</p>
</li>
<li><p>A running AWS EKS cluster</p>
</li>
<li><p>Some knowledge of Docker and Kubernetes</p>
</li>
</ul>
<h1 id="heading-the-different-steps-are-as-follows">The different steps are as follows</h1>
<ol>
<li><p>Set up the application code and the Dockerfile</p>
</li>
<li><p>Define CI/CD GitLab variables</p>
</li>
<li><p>Set Kubernetes manifest files</p>
</li>
<li><p>Set up the CI/CD pipeline</p>
</li>
<li><p>Trigger the pipeline with a git push.</p>
</li>
</ol>
<h1 id="heading-directory-structure">Directory structure</h1>
<pre><code class="lang-bash">
├── Dockerfile
├── .gitlab-ci.yml
├── .k8s
│   ├── deployment.yaml
│   └── services.yaml
└── src
    ├── app.py
    └── requirements.txt
</code></pre>
<p>The application files and the Kubernetes configurations are in the <strong>src</strong> and <strong>.k8s</strong> directories respectively, and the Dockerfile and the GitLab CI script are at the root of the repository.</p>
<h1 id="heading-set-up-the-application-code-and-the-dockerfile">Set up the application code and the Dockerfile</h1>
<p>Use whatever language or framework you like to create your application; the main thing is to have an app that you can containerize with a Dockerfile. Personally, I used a simple Python application built with the Flask framework that displays a "Hello, World!" message.</p>
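<p>For reference, here is a minimal sketch of that kind of app (placed in <code>src/app.py</code>); treat it as an illustration rather than the exact code I used. It listens on port 5000, which is the port the Kubernetes Service will later target.</p>
<pre><code class="lang-python">from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello, World!"


if __name__ == "__main__":
    # Bind to all interfaces so the container port mapping works
    app.run(host="0.0.0.0", port=5000)
</code></pre>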
<p>For the Dockerfile, here is an example of what it could look like:</p>
<pre><code class="lang-apache"><span class="hljs-attribute">FROM</span> python:<span class="hljs-number">3</span>.<span class="hljs-number">9</span>-slim-buster

<span class="hljs-attribute">WORKDIR</span> /app

<span class="hljs-attribute">COPY</span> src/requirements.txt .
<span class="hljs-attribute">RUN</span> pip install --no-cache-dir -r requirements.txt

<span class="hljs-attribute">COPY</span> src/app.py .

<span class="hljs-attribute">CMD</span><span class="hljs-meta"> ["python", "./app.py"]</span>
</code></pre>
<p>I recommend testing your Docker image locally before continuing.</p>
<h1 id="heading-define-cicd-gitlab-variables">Define CI/CD GitLab variables</h1>
<p>To connect to AWS, Kubernetes, and Docker Hub from GitLab CI, you need to define variables in the GitLab CI/CD pipeline. You can define these variables in the GitLab project settings under <em>CI/CD &gt; Variables</em>.</p>
<h2 id="heading-to-connect-to-aws">To connect to AWS</h2>
<ul>
<li><p><code>${AWS_ACCESS_KEY_ID}</code>: This variable contains the access key ID for the AWS account used to deploy the application.</p>
</li>
<li><p><code>${AWS_SECRET_ACCESS_KEY}</code>: This variable contains the secret access key for the AWS account used to deploy the application.</p>
</li>
<li><p><code>${AWS_DEFAULT_REGION}</code>: This variable contains the AWS region where the application will be deployed.</p>
</li>
</ul>
<h2 id="heading-the-variables-related-to-the-docker-hub-or-container-registry">The variables related to the Docker hub or container registry</h2>
<ul>
<li><p><code>${CI_REGISTRY_USER}</code>: This variable contains the username used to authenticate with the container registry; it can be Docker Hub, the GitLab registry, or whatever you prefer.</p>
</li>
<li><p><code>${CI_REGISTRY_PASSWORD}</code>: This variable contains the password used to authenticate with the Container Registry.</p>
</li>
<li><p><code>${CI_REGISTRY_IMAGE}</code>: This variable contains the name of the Docker image in the Container Registry.</p>
</li>
<li><p><code>${CI_REGISTRY_IMAGE_VERSION}</code>: This variable contains the version or tag of the Docker image in the Container Registry.</p>
</li>
</ul>
<h2 id="heading-the-configuration-file-to-access-kubernetes">The configuration file to access Kubernetes</h2>
<ul>
<li><code>${KUBECONFIG}</code>: This file-type variable contains the Kubernetes configuration file used to authenticate with the Kubernetes cluster. This file is typically located at <code>~/.kube/config</code>, so when adding it to GitLab, make sure you choose <strong>File</strong> as the variable type.</li>
</ul>
<h1 id="heading-set-kubernetes-manifest-files">Set Kubernetes manifest files</h1>
<p>Now it's time to define the manifest files for the Kubernetes deployment. As shown in the directory structure above, these files are located in the <strong>.k8s</strong> folder.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
  <span class="hljs-attr">strategy:</span> {}
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-app</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}</span>
        <span class="hljs-attr">resources:</span> {}
<span class="hljs-attr">status:</span> {}
</code></pre>
<p><strong>deployment.yaml</strong> defines a Kubernetes Deployment for the application named "my-app". The Deployment creates a single replica of the application and specifies the container image to be used for the application using the variable <code>${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}</code>. This variable references the Docker image built and pushed to a Docker registry.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">5000</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
<span class="hljs-attr">status:</span>
  <span class="hljs-attr">loadBalancer:</span> {}
</code></pre>
<p><strong>services.yaml</strong> defines a Kubernetes Service for the "my-app" application. The Service exposes the application on port 80 and routes traffic to port 5000 on the application container, the port our hello-world app listens on. The file also specifies the labels used to identify the application in Kubernetes.</p>
<blockquote>
<p>Hint: to generate a quick template from which to write these configuration files, and thus reduce the risk of errors and save time, the <code>kubectl</code> command has an option that can be very useful.</p>
</blockquote>
<p>The <code>--dry-run=client -o yaml</code> option in the <code>kubectl</code> command generates a YAML representation of the Kubernetes resource that would be created or modified without actually creating or modifying the resource. Here is an example of how to generate our YAML files:</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#deployment</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">create</span> <span class="hljs-string">deployment</span> <span class="hljs-string">&lt;app_name&gt;</span> <span class="hljs-string">\</span>
     <span class="hljs-string">--image=&lt;the_docker_image&gt;</span> <span class="hljs-string">\</span>
     <span class="hljs-string">--dry-run=client</span> <span class="hljs-string">-o</span> <span class="hljs-string">yaml</span> <span class="hljs-string">&gt;</span> <span class="hljs-string">deployment.yaml</span>

<span class="hljs-comment">#service</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">expose</span> <span class="hljs-string">deployment</span> <span class="hljs-string">&lt;app_name&gt;</span> <span class="hljs-string">\</span>
     <span class="hljs-string">--port=80</span> <span class="hljs-string">--target-port=5000</span> <span class="hljs-string">\</span>
     <span class="hljs-string">--dry-run=client</span> <span class="hljs-string">-o</span> <span class="hljs-string">yaml</span> <span class="hljs-string">&gt;</span> <span class="hljs-string">service.yaml</span>
</code></pre>
<p>This should generate the two YAML files, which you can then adjust to fit your use case. You can also use the <code>--dry-run</code> option with the <code>kubectl</code> command to validate a Kubernetes YAML file without actually applying it to a cluster.</p>
<p>To use the <code>--dry-run</code> option to validate a Kubernetes YAML file, you can run the following command:</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">apply</span> <span class="hljs-string">--dry-run=client</span> <span class="hljs-string">-f</span> <span class="hljs-string">&lt;yaml_file_path&gt;</span>
</code></pre>
<h1 id="heading-set-up-the-cicd-pipeline">Set up the CI/CD pipeline</h1>
<p>Now we can start setting up the GitLab CI script for our pipeline. In our case, the script has five stages.</p>
<p>The script consists of the following stages:</p>
<ul>
<li><p><code>docker build</code>: Builds the Docker image and tags it with the registry information.</p>
</li>
<li><p><code>docker push</code>: Pushes the Docker image to the registry.</p>
</li>
<li><p><code>test</code>: Runs the Checkov tool to validate the Kubernetes deployment and service files.</p>
</li>
<li><p><code>deploy services</code>: Deploys the Kubernetes services to EKS using the <code>kubectl</code> command.</p>
</li>
<li><p><code>deploy app</code>: Deploys the Kubernetes application to EKS using the <code>kubectl</code> command.</p>
</li>
</ul>
<p>Let's go a little further into the test phase; our test here is security-oriented, and we have integrated Checkov into the pipeline. <a target="_blank" href="https://www.checkov.io/">Checkov</a> is an open-source tool used for static code analysis of infrastructure-as-code (IaC) files. In this case, it will be used to perform security and compliance checks on the Kubernetes YAML files in the .k8s directory.</p>
<p>By running Checkov on the .k8s/deployment.yaml and .k8s/services.yaml files, the GitLab CI/CD pipeline can ensure that the Kubernetes resources being deployed meet the security and compliance requirements defined in the policies and rules enforced by Checkov.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">test</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">deploy</span> <span class="hljs-string">services</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">deploy</span> <span class="hljs-string">app</span>


<span class="hljs-attr">Build docker image:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">docker:stable</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span>
  <span class="hljs-attr">before_script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">-u</span> <span class="hljs-string">${CI_REGISTRY_USER}</span> <span class="hljs-string">-p</span> <span class="hljs-string">${CI_REGISTRY_PASSWORD}</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">-t</span> <span class="hljs-string">the-app</span> <span class="hljs-string">.</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">tag</span> <span class="hljs-string">the-app:latest</span> <span class="hljs-string">${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}</span> 

<span class="hljs-attr">Push to registry:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">docker:stable</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span>
  <span class="hljs-attr">before_script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">-u</span> <span class="hljs-string">${CI_REGISTRY_USER}</span> <span class="hljs-string">-p</span> <span class="hljs-string">${CI_REGISTRY_PASSWORD}</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}</span>

<span class="hljs-attr">Test:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">bridgecrew/checkov:latest</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">test</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">checkov</span> <span class="hljs-string">-d</span> <span class="hljs-string">.k8s/deployments.yaml</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">checkov</span> <span class="hljs-string">-d</span> <span class="hljs-string">.k8s/services.yaml</span>
  <span class="hljs-attr">allow_failure:</span> <span class="hljs-literal">true</span>



<span class="hljs-attr">Deploy services on EKS:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">deploy</span> <span class="hljs-string">services</span>
  <span class="hljs-attr">before_script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">export</span> <span class="hljs-string">AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">export</span> <span class="hljs-string">AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">export</span> <span class="hljs-string">AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">cd</span> <span class="hljs-string">.k8s</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">kubectl</span> <span class="hljs-string">--kubeconfig</span> <span class="hljs-string">${KUBECONFIG}</span> <span class="hljs-string">apply</span> <span class="hljs-string">-f</span> <span class="hljs-string">services.yaml</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">changes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">.k8s/services.yaml</span>


<span class="hljs-attr">Deploy app on EKS:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">deploy</span> <span class="hljs-string">app</span>
  <span class="hljs-attr">before_script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">export</span> <span class="hljs-string">AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">export</span> <span class="hljs-string">AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">export</span> <span class="hljs-string">AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">cd</span> <span class="hljs-string">.k8s</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">kubectl</span> <span class="hljs-string">--kubeconfig</span> <span class="hljs-string">${KUBECONFIG}</span> <span class="hljs-string">apply</span> <span class="hljs-string">-f</span> <span class="hljs-string">deployment.yaml</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">kubectl</span> <span class="hljs-string">--kubeconfig</span> <span class="hljs-string">${KUBECONFIG}</span> <span class="hljs-string">rollout</span> <span class="hljs-string">status</span> <span class="hljs-string">deployments</span>
</code></pre>
<ul>
<li><p><strong>Docker build stage</strong>: Builds a Docker image for the application specified by the Dockerfile using the official <code>docker:stable</code> image. Before building the image, it logs in to the Docker registry using the username and password provided as GitLab CI environment variables <code>${CI_REGISTRY_USER}</code> and <code>${CI_REGISTRY_PASSWORD}</code>. After building the image, it tags it with <code>${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}</code>.</p>
</li>
<li><p><strong>Docker push stage</strong>: Pushes the Docker image created in the previous stage to the container registry using the <code>docker push</code> command.</p>
</li>
<li><p><strong>Test stage</strong>: Runs the <code>checkov</code> tool against the Kubernetes manifest files deployment.yaml and services.yaml. <em>In this case, this stage is allowed to fail and does not prevent the pipeline from continuing</em>.</p>
</li>
<li><p><strong>Deploy services on EKS stage</strong>: Deploys the Kubernetes services specified in the <code>services.yaml</code> file to the Amazon Elastic Kubernetes Service (EKS) cluster. Before deploying, it sets the AWS credentials and region environment variables. It only runs the deployment if the <code>services.yaml</code> file has been modified since the last pipeline run.</p>
</li>
<li><p><strong>Deploy app on EKS stage</strong>: Deploys the Kubernetes deployment specified in the <code>deployment.yaml</code> file to the EKS cluster. Before deploying, it sets the AWS credentials and region environment variables. After deploying, it checks the rollout status of the deployment.</p>
</li>
</ul>
<p>To make sure this script is valid, you can use a lint tool to validate it; this helps you fix errors quickly and saves time. For my part, I used the <a target="_blank" href="https://docs.gitlab.com/ee/integration/glab/"><strong>glab</strong></a> CLI tool, whose <code>glab ci lint</code> command validates the script and makes sure everything is correct.</p>
<h1 id="heading-trigger-the-pipeline-with-a-git-push">Trigger the pipeline with a git push.</h1>
<p>To trigger this GitLab CI pipeline, you need to commit and push the code changes to the GitLab repository that contains this GitLab CI script.</p>
<p>Once you have pushed the changes to the repository, GitLab CI automatically detects the changes and starts running the pipeline. The pipeline can also be triggered manually by clicking the "CI/CD" tab in the GitLab repository and clicking the "Run Pipeline" button.</p>
<p>Note that to run the pipeline successfully, you need to ensure that you have configured the necessary environment variables on GitLab CI, such as <code>CI_REGISTRY_USER</code>, <code>CI_REGISTRY_PASSWORD</code>, <code>AWS_ACCESS_KEY_ID</code>, <code>AWS_SECRET_ACCESS_KEY</code>, <code>AWS_DEFAULT_REGION</code>, and <code>KUBECONFIG</code>. These variables are used to log in to the GitLab registry, authenticate with AWS, and connect to the Kubernetes cluster.</p>
<p>You can find the files I used here on my <a target="_blank" href="https://github.com/davWK/AWS-EKS-Deployment-with-Gitlab-CI-CD-and-Checkov">GitHub</a> or <a target="_blank" href="https://gitlab.com/davwoglo/gitlab-ci_showcase.git">GitLab</a></p>
<p>I remain open to any contributions and suggestions to improve this work. Do not hesitate to share them by opening an issue.</p>
<p>Thanks for reading :)</p>
]]></content:encoded></item><item><title><![CDATA[AWS Cloud resume challenge]]></title><description><![CDATA[In this article I describe how I worked through the AWS CRC project, with a slight comparison to Google Cloud based on my own experience. By the way, I also wrote an article about the Google Cloud version of this project, you can have a look at...]]></description><link>https://blog.cloudvio.net/aws-cloud-resume-challenge</link><guid isPermaLink="true">https://blog.cloudvio.net/aws-cloud-resume-challenge</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<p>In this article I describe how I worked through the AWS CRC project, with a slight comparison to Google Cloud based on my own experience. By the way, I also wrote an article about the Google Cloud version of this project; you can have a look at it <a target="_blank" href="https://blog.davidwoglo.me/google-cloud-resume-challenge">here</a>.</p>
<p>If you are new to all this cloud-related stuff, it is better to take the <a target="_blank" href="https://aws.amazon.com/certification/certified-cloud-practitioner/">AWS Cloud Practitioner certification</a> exam as a first step, as recommended in the <a target="_blank" href="https://cloudresumechallenge.dev/docs/the-challenge/aws/">CRC guide</a>. Whether you are unfamiliar with cloud environments or come from another cloud provider, taking this exam will help you validate your knowledge of the cloud and your familiarity with the different AWS services.</p>
<p>Personally, I just quickly completed the <a target="_blank" href="https://www.credly.com/badges/aec0902e-11fc-4056-82e3-0fba80d07dc3/linked_in_profile">AWS Cloud Practitioner Quest</a> to refresh my knowledge of AWS, as I am already quite familiar with the cloud and have previously worked with some AWS services. The quest is free; you can try it <a target="_blank" href="http://skillbuilder.aws/cloudquest?acq=sec&amp;sec=syq">here</a>.</p>
<p>Well, that's enough for the intro; let's get to the heart of the matter.</p>
<h1 id="heading-big-picture-of-deployments">Big picture of deployments</h1>
<p>This project is about creating a website hosted on Amazon S3, displaying on that site the number of visitors, which is calculated by an AWS Lambda function and stored in a DynamoDB table, and then automating the whole process, both website publication and resource deployment, via CI/CD pipelines.
My resume page is available at <a target="_blank" href="https://aws.davidwoglo.me/">https://aws.davidwoglo.me/</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664450893954/wsYTu5JJj.png" alt="AWS resume challenge.drawio.png" /></p>
<h1 id="heading-website">Website</h1>
<p>The first steps of the project consist of setting up a website, using a completely different approach from the traditional one. Traditionally, we set up a machine or VM, install a web server on it (Apache, Nginx, or whatever you choose), upload the HTML/CSS files and so on, and then do some configuration to make the site available.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664459489134/rVz6_illh.png" alt="website.png" /></p>
<p>Here I'm spared this server preparation task (and the underlying configuration). I just uploaded the website files to an <a target="_blank" href="https://aws.amazon.com/s3/">Amazon S3</a> bucket, the object storage service of AWS and the equivalent of Cloud Storage on Google Cloud, then did some small configuration in two or three clicks, just to let S3 know that I want to use this bucket to host a static website, and then we have a ready-to-use website. No need for a physical server or additional tools to install. <strong><em>This is the serverless approach</em></strong>, and it is what the rest of the project is based on: I didn't need to use any server or VM.</p>
<p>For my site to be accessible via a user-friendly and secure HTTPS URL, I needed to manage DNS and the SSL certificate configuration. I used <a target="_blank" href="https://aws.amazon.com/certificate-manager/">AWS Certificate Manager</a> to obtain a certificate for my domain, whose ownership had to be verified by automated email due to some problems with my custom domain provider, although the recommended way to do this is with a CNAME record. Then, to route DNS traffic, I used <a target="_blank" href="https://aws.amazon.com/route53/">Amazon Route 53</a>, and the delivery of website content is sped up by <a target="_blank" href="https://aws.amazon.com/cloudfront/">Amazon CloudFront</a> (the CDN service). All these configurations were done manually and separately, then tied together at the end to make things work.</p>
<p>At this point, let's make a small comparison with how Google Cloud handles this. On Google Cloud, all of this can be included in the creation of a load balancer, where we just have to activate automatic SSL management for HTTPS and CDN for content caching.</p>
<h1 id="heading-counting-website-visitors">Counting website visitors</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664462364922/kiaBOoy_K.png" alt="Visitors_count.png.png" /></p>
<p>My web page includes a visitor counter that displays how many people have accessed the site. To do this, I created an <a target="_blank" href="https://aws.amazon.com/lambda/">AWS Lambda</a> function, a <a target="_blank" href="https://aws.amazon.com/dynamodb/">DynamoDB</a> table, and a REST API. On one side, I wrote Python code that is executed by Lambda; its job is to get the current number of visitors stored in DynamoDB and increment it by 1 each time a visitor accesses my page. On the other side, I added JavaScript code to my site's files; its job is to fetch the number of visitors from the DynamoDB table and display it on my page. The communication between the JS code and the database goes through a REST API that I set up using <a target="_blank" href="https://aws.amazon.com/api-gateway/">Amazon API Gateway</a>.</p>
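<p>To give an idea of what such a Lambda function can look like, here is a minimal sketch using <code>boto3</code>. The table, key, and attribute names are hypothetical, not necessarily the ones I used; the point is the atomic <code>ADD</code> update that increments the counter and returns the new value.</p>
<pre><code class="lang-python">import json

import boto3

# Hypothetical table and key names, for illustration only
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("visitor-count")


def lambda_handler(event, context):
    # Atomically increment the counter and read back the updated value
    response = table.update_item(
        Key={"id": "resume"},
        UpdateExpression="ADD visits :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    visits = int(response["Attributes"]["visits"])
    return {
        "statusCode": 200,
        # CORS header so the JavaScript on the static site can call the API
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": visits}),
    }
</code></pre>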
<p>This is the part that gave me headaches when I was doing the project on Google Cloud. I didn't use an API gateway there (because, honestly, I didn't know about it), so I used the open-source <a target="_blank" href="https://github.com/GoogleCloudPlatform/functions-framework-python">Functions Framework for Python</a>, in which I used the client library API to communicate with Cloud Firestore, the equivalent of DynamoDB.</p>
<h1 id="heading-automation-cicd-iac-source-control">Automation (CI/CD, IaC, Source Control)</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664469010300/Evzog_TGJ.png" alt="IAC.png" /></p>
<p>To accelerate and simplify updates to my deployments, whether the website (frontend) or the underlying resources (backend), I needed to set up a CI/CD pipeline. A CI/CD pipeline is a series of steps that must be performed in order to deliver a new version of software.
Well, for the website that's fine, we can manage it as software (it's composed of files, as software is). The question you could ask yourself is: what about the cloud resources in the background, since they are not files? This is where Infrastructure as Code (IaC) comes in. But before talking about it, let's see how the CI/CD for the frontend was set up.
I created a source control repository on GitHub where I put the website files, then I wrote a workflow file that instructs <a target="_blank" href="https://docs.github.com/en/actions">GitHub Actions</a> on how to update my website every time I push.
Now let's talk about the <a target="_blank" href="https://www.hashicorp.com/resources/what-is-infrastructure-as-code/">Infrastructure as Code</a> part, i.e. how to manage and provision resources through machine-readable definition files rather than through interactive configuration, as is traditionally done.
Well, I used <a target="_blank" href="https://www.terraform.io/docs">Terraform</a> to define the DynamoDB table, the API Gateway, and the Lambda function configurations in a template and deploy them with the Terraform CLI. You see, now that we can also manage our infrastructure as software, we can integrate it into a CI/CD pipeline to accelerate the deployment and update of infrastructure resources.
I set up the backend pipeline the same way as the website's, except that here the GitHub Actions workflow file is a bit more complex.</p>
<p>You can access my frontend repository <a target="_blank" href="https://github.com/davWK/cloud-resume-challenge-AWS">here</a> and the backend one <a target="_blank" href="https://github.com/davWK/cicd-cloud-resume-challenge-AWS">here</a></p>
<p>Well, here is how things went during this project. Thanks for reading :) 
Please check below for some useful resources.</p>
<h1 id="heading-useful-resources">Useful resources</h1>
<ul>
<li><a target="_blank" href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started-cloudfront-overview.html">Use an Amazon CloudFront distribution to serve a static website</a></li>
<li><a target="_blank" href="https://www.cloudflare.com/learning/dns/what-is-dns/">What is DNS? | How DNS works</a></li>
<li><a target="_blank" href="https://serverlessland.com/">Serverless Land</a></li>
<li><a target="_blank" href="https://www.dynamodbguide.com/">DynamoDB Guide</a></li>
<li><a target="_blank" href="https://theultimateapichallenge.com/">API Projects</a></li>
<li><a target="_blank" href="https://wiki.python.org/moin/BeginnersGuide/NonProgrammers">Python</a></li>
<li><a target="_blank" href="https://cors.serverlessland.com/">Amazon API Gateway CORS Configurator</a></li>
<li><a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Language_Overview">Java Script</a></li>
<li><a target="_blank" href="https://www.freecodecamp.org/news/how-to-make-api-calls-with-fetch">API Calls</a></li>
<li><a target="_blank" href="https://developer.hashicorp.com/terraform/tutorials/automation/github-actions">Automate Terraform with GitHub Actions</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Google Cloud Resume Challenge]]></title><description><![CDATA[How did it all start?
It all started after I passed my certification. I thought to myself, this is it, you've achieved your first short term goal of getting into the Cloud world, something you've been studying tirelessly for in recent months. But I m...]]></description><link>https://blog.cloudvio.net/google-cloud-resume-challenge</link><guid isPermaLink="true">https://blog.cloudvio.net/google-cloud-resume-challenge</guid><dc:creator><![CDATA[David WOGLO]]></dc:creator><pubDate>Thu, 25 Sep 2025 22:06:51 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-how-did-it-all-start">How did it all start?</h2>
<p>It all started after I passed my certification. I thought to myself, this is it, you've achieved your first short-term goal of getting into the cloud world, something you've been studying tirelessly for in recent months. But I must admit I thought that once certified, the offers would fall from left and right. Haha!!! That's not the case and it wasn't going to happen; nobody will make you an offer just because you are certified, eh, certified people can be found all over the place lately. Certification at best is just the key that opens the door of the Cloud house 😀; it just offers the opportunity to be consulted, to have a little bit of visibility, or I would say consideration. But to get a job (like settling down in the Cloud house 😀), to get offers as I imagined, you have to back up your know-how and your knowledge of the cloud with practical, relevant and convincing experience. And I can see from afar the question that we juniors often ask ourselves: "How can we get experience if we are not given the opportunity to join a company where we can develop that experience? 🤔" Honestly (at least in the IT world) we can do without it: we can demonstrate our skills, our experience, and our knowledge through projects and challenges that take us through the pitfalls, the mistakes, the solutions, the obstacles, the dark days and the light, and that help us build and polish a profile that is well adapted to, and a solution for, the needs and problems of companies.
It is with this in mind that, after watching <a target="_blank" href="https://youtu.be/vviS_fHnJu4">a video</a> by Forrest Brazeal, I came across the <a target="_blank" href="https://cloudresumechallenge.dev/">Cloud Resume Challenge</a>.</p>
<h2 id="heading-what-is-the-cloud-resume-challenge">What is The Cloud Resume Challenge ?</h2>
<p>The Cloud Resume Challenge is a hands-on project designed to help bridge the gap from Cloud certification to Cloud job. It incorporates many of the skills that real Cloud and DevOps engineers use in their daily work. It is a multi-step resume project (roughly 16 steps), from the creation of a website to the implementation of a CI/CD pipeline, which helps you build and demonstrate skills fundamental to pursuing a career as a Cloud Engineer.</p>
<h2 id="heading-certification">Certification</h2>
<p>The first step of the challenge is to get a Cloud certification at the beginner or associate level. I don't think it's an obligation, but from my personal point of view I strongly recommend it, especially if you are a beginner or have a non-technical background. It will allow you to acquire and validate the fundamentals you need to pursue a career in the cloud. For my part, I passed the associate-level Google Cloud certification, <a target="_blank" href="https://cloud.google.com/certification/cloud-engineer">the Google Cloud Associate Cloud Engineer</a>, thanks to the <a target="_blank" href="https://andela.com/alc/google-africa-developer-scholarship-gads/">Google Africa Developer Scholarship</a> (GADS) 2021 program.</p>
<h2 id="heading-the-architecture">The architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660730974683/8P7qlS59o.png" alt="ArchFinal.png" /></p>
<h2 id="heading-website">Website</h2>
<h3 id="heading-setup">Setup</h3>
<p>To move fast and get to the heart of things, I didn't write the HTML files from scratch; instead, I took a free template that I modified and adapted to my needs with the little, rusty knowledge I have of HTML and CSS. Once the website files are ready, the site must be hosted, right? Since the challenge is based on a serverless spirit, the site is not hosted on a server but on Google Cloud's object storage service. By uploading the website contents as files to Cloud Storage, we can host static websites on buckets. <a target="_blank" href="https://cloud.google.com/storage/docs/hosting-static-website">See how to do it</a>.</p>
<h3 id="heading-make-the-site-accessible-via-a-domain-name-and-secure-it">Make the site accessible via a domain name and  secure it</h3>
<p>Usually, to make a site accessible via a domain name, you map the IP address of the server hosting the site to that domain name. But in our case there is no server in play with an IP address, so what can we do? Fortunately, Google Cloud's managed HTTP load balancer allows us to set up a load balancer with a public IP address and point that address to a bucket hosting a static site. This solves the IP address problem: we now have an IP address for our website, so we can link a domain name to it and access the website as usual.
With only that, however, the site is only accessible over HTTP; for better security the site must use HTTPS. Again, Google Cloud's load balancer lets us set this up without too much trouble: the HTTPS load balancer has an option for automatic management of SSL certificates that can be activated while setting it up.</p>
<blockquote>
<p>But in a corporate environment, it is better for the certificates to be managed and controlled by you.
For the domain name I used Namecheap; however, you can use Cloud DNS from Google Cloud to do this.</p>
</blockquote>
<h2 id="heading-website-visitors-count">Website visitors count</h2>
<p>This part of the challenge consists of counting the number of visitors who have viewed the site.
For this I wrote JavaScript code alongside the website files on one side, and on the other side a Python function hosted on Google Cloud Functions. The JavaScript triggers the Python function, which runs, calculates and stores the result in the document NoSQL database Firestore, and returns the total number of visitors in JSON format to the JS code, which is responsible for displaying it on the site. This happens each time the page is visited. The Python function in this case serves as an API, so the JS code never communicates directly with the database.</p>
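<p>As an illustration, here is a rough sketch of such a function, written against the Firestore client library; the collection, document, and field names are hypothetical, and the real function may be organised differently. It increments the counter and returns the new total as JSON, with a CORS header so the JavaScript on the site can call it.</p>
<pre><code class="lang-python">import json

from google.cloud import firestore

# Hypothetical collection/document/field names, for illustration only
db = firestore.Client()
doc_ref = db.collection("site-stats").document("visitors")


def visitor_count(request):
    """HTTP Cloud Function: increment and return the visitor counter."""
    # Atomically increment the counter (creates the document if missing)
    doc_ref.set({"count": firestore.Increment(1)}, merge=True)
    count = doc_ref.get().to_dict()["count"]
    headers = {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "application/json",
    }
    return (json.dumps({"count": count}), 200, headers)
</code></pre>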
<h2 id="heading-test">Test</h2>
<p>To ensure that the Python code works as it should and returns the expected result, I wrote a test using the unittest module from the Python standard library, which helps write and run tests for Python code. Since the JS code needs to receive the visitor count in JSON format, the Python test here verifies that the data returned by the function is indeed valid JSON.</p>
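<p>Here is a sketch of what such a test can look like, assuming the function from the snippet above lives in <code>main.py</code> and that a Firestore client can be constructed at import time (or is mocked as well); the Firestore document itself is stubbed so the test performs no real reads or writes.</p>
<pre><code class="lang-python">import json
import unittest
from unittest.mock import MagicMock, patch

import main  # the module containing visitor_count (assumed name)


class TestVisitorCount(unittest.TestCase):
    @patch.object(main, "doc_ref")
    def test_returns_valid_json(self, fake_doc):
        # Stub the Firestore document so no real database call happens
        fake_doc.get.return_value.to_dict.return_value = {"count": 42}

        body, status, headers = main.visitor_count(MagicMock())

        self.assertEqual(status, 200)
        data = json.loads(body)  # raises an error if the body is not valid JSON
        self.assertEqual(data["count"], 42)


if __name__ == "__main__":
    unittest.main()
</code></pre>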
<p><em>Well I think we can take a coffee break ☕</em></p>
<blockquote>
<p>So far, everything has been done by manually clicking around in the Google Cloud console. In the following stages, the methods and techniques used by DevOps engineers come into play, in particular <a target="_blank" href="https://www.hashicorp.com/resources/what-is-infrastructure-as-code/">Infrastructure as Code</a>, <a target="_blank" href="https://about.gitlab.com/topics/gitops/">GitOps</a> and <a target="_blank" href="https://about.gitlab.com/topics/ci-cd/">CI/CD</a> (I admit it's not that hard in this challenge, but it's the same way of working).</p>
</blockquote>
<h2 id="heading-infrastructure-as-code">Infrastructure as Code</h2>
<p>Here I discovered a rather brilliant thing. Before telling you about it, know that this step expects you to define resources in a Terraform template and deploy them using the Terraform CLI. But instead of getting into the hassle of rewriting everything I had done so far in Terraform, testing it, destroying it, aligning it with what exists and all that, I discovered a <a target="_blank" href="https://cloud.google.com/sdk/gcloud/reference/beta/resource-config/bulk-export">Google Cloud tool</a> that I can use to generate Terraform code for resources in a project, folder, or organization.
The <code>gcloud beta resource-config bulk-export --resource-format=Terraform</code> command exports resources currently configured in the project, folder, or organization and prints them to the screen in HCL code format.
(<em>Of course that's what I did.</em>)
So what's left for me to do is manage these Terraform templates with a source control system (GitHub), so I can integrate them into the pipeline later; when I need to make changes to my cloud resources, I just have to adjust a few lines and that's it.</p>
<p><em>But I should point out that before using this tool, you need to already have some Terraform basics. <a target="_blank" href="https://learn.hashicorp.com/collections/terraform/gcp-get-started">Here's a great guide to getting started</a>.</em></p>
<h2 id="heading-cicd">CI/CD</h2>
<p>Now, let's set up the CI/CD pipeline, or rather the pipelines, since there are two: one for the frontend and another for the backend. I created a GitHub repository for the frontend (i.e. the website files), and thanks to Cloud Build the update of my website is automated and triggered each time I push. On the other side, for the backend (i.e. the infrastructure resources defined in a Terraform template), I also created a GitHub repo. So when I make changes to the .tf files, my cloud resources are automatically updated, whether it's a deletion, a change or an addition, all without any intervention on my part.</p>
<h2 id="heading-in-the-end">In the end ...</h2>
<p>Well, that's an overview of how I managed to complete this challenge, which by the way is very rewarding: it allowed me to learn and discover really useful things, to identify my weaknesses and how to strengthen them, and above all to gain relevant experience in the cloud. And I don't intend to stop there 😉
Of course some of the steps were not fun, but thanks to the help of some of my SWE connections, in particular <a target="_blank" href="https://vincentbakpatina.me/">Vincent Bakpatina</a>, <a target="_blank" href="https://www.linkedin.com/in/ayao-corneille-allogbalo/">Corneille ALLOGBALO</a>, <a target="_blank" href="https://www.linkedin.com/in/reussiteforever/">Abdel-Khafid ATOKOU</a> and <a target="_blank" href="https://www.linkedin.com/in/samtou-assekouda-b2a78b174/">Samtou Assekouda</a>, I was able to overcome them.</p>
<h3 id="heading-here-are-some-useful-resources">Here are some useful resources 👇🏾</h3>
<p>Here is <a target="_blank" href="https://github.com/davWK/cloud-resume-challenge.git">my Github repo for the frontend</a>, concerning the backend, since it contains some sensitive information I have not made it public</p>
<p><a target="_blank" href="https://cloud.google.com/storage/docs/hosting-static-website">Host a static website on Google Cloud Storage</a>
<a target="_blank" href="https://github.com/googleCloudPlatform/functions-framework-python">Functions Framework for Python</a>
<a target="_blank" href="https://cloud.google.com/community/tutorials/automated-publishing-cloud-build">Automated static website publishing with Cloud Build</a>
<a target="_blank" href="https://cloud.google.com/cdn/docs/invalidating-cached-content#gcloud">Invalidate cached content</a> 
<a target="_blank" href="https://cloud.google.com/architecture/managing-infrastructure-as-code?utm_source=youtube&amp;utm_medium=unpaidsoc&amp;utm_campaign=CDR_mao_gcp_ce93fpqrkck_ServerlessExpeditions_040821&amp;utm_content=description">Managing infrastructure as code with Terraform, Cloud Build, and GitOps</a> 
<a target="_blank" href="https://cloud.google.com/docs/terraform/resource-management/export">Export your Google Cloud resources into Terraform format</a></p>
]]></content:encoded></item></channel></rss>