<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Nana's blog]]></title><description><![CDATA[I am Nana Quayson. A cloud engineer and tech consultant. I have worked with both small and large enterprise clients in their journey to the AWS cloud platform.]]></description><link>https://nquayson.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 15:54:04 GMT</lastBuildDate><atom:link href="https://nquayson.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Build a serverless hit counter on AWS using Terraform]]></title><description><![CDATA[In this easy-to-follow guide, you will learn how to create a hit counter for your website. You will need to be familiar with AWS serverless services and have a basic understanding of Terraform to be able to easily follow this guide. At the end of thi...]]></description><link>https://nquayson.com/build-a-serverless-hit-counter-on-aws-using-terraform</link><guid isPermaLink="true">https://nquayson.com/build-a-serverless-hit-counter-on-aws-using-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[APIs]]></category><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Tue, 24 Oct 2023 12:15:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1697425006678/49d7ac78-366b-40d5-b377-cec2a0530f9e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this easy-to-follow guide, you will learn how to create a hit counter for your website. You will need to be familiar with AWS serverless services and have a basic understanding of Terraform to be able to easily follow this guide. 
At the end of this guide, you will build a hit counter API that you can integrate into your web pages. See a working demo here. <a target="_blank" href="https://demos.nquayson.com/hitcounter/index.html">https://demos.nquayson.com/hitcounter/index.html</a></p>
<p>Why build a hit counter from scratch on AWS when other services exist for the purpose?</p>
<p>Consider this as a fun little hands-on project that hones your AWS, IaC and Python programming skills!</p>
<h1 id="heading-overview">Overview</h1>
<p>Here is a quick overview of all the services that will be deployed in this project:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695786668692/15561aa1-895f-43b0-9a5c-7cefad86979f.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>S3 bucket for hosting the static assets of the web application</p>
</li>
<li><p>ACM certificate for enabling secure communication over HTTPS</p>
</li>
<li><p>CloudFront for serving the static assets across edge locations for our global user base</p>
</li>
<li><p>DynamoDB will be our NoSQL database choice</p>
</li>
<li><p>Lambda function for running our backend logic: getting and setting the values in the database</p>
</li>
<li><p>A Lambda function URL will be used rather than an API Gateway</p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>An AWS account and some AWS experience</p>
</li>
<li><p>Some Infrastructure as Code (IaC) knowledge</p>
</li>
<li><p>An <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html">AWS CLI profile</a> configured for your development environment to access your AWS account</p>
</li>
</ul>
<h2 id="heading-fork-the-repository">Fork the repository</h2>
<p>Before we start, <a target="_blank" href="https://github.com/nquayson/aws-hitcounter/fork">make your fork</a> of the repository containing the full source code. This will be your own copy giving you the flexibility to experiment and explore.</p>
<h2 id="heading-lambda-function-logic">Lambda function logic</h2>
<p>The handler function is the entry point when the Lambda function is invoked. The handler calls the update_hit() function, which updates the 'hit_count' attribute of a specific item (key = "1") in the DynamoDB table. The <code>UpdateExpression</code> parameter defines the update operation to perform. Here, it uses the 'ADD' action to increment the value of the 'hit_count' attribute by 1. The updated value of 'hit_count' is returned as a string, and the string zfill() method pads it with zeros on the left, transforming "234" -&gt; "00234".</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="e35eceb97db7432915cc97a2f89f3dd9"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/nquayson/e35eceb97db7432915cc97a2f89f3dd9" class="embed-card">https://gist.github.com/nquayson/e35eceb97db7432915cc97a2f89f3dd9</a></div><p> </p>
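<p>As a rough sketch of the pieces described above (the real code lives in the gist; helper names such as <code>update_item_kwargs</code> and the padding width of 5 are illustrative assumptions, while <code>hit_count</code> and the key <code>"1"</code> come from the walkthrough), the update parameters and the padding look like this:</p>

```python
# Hedged sketch of the update and padding logic -- the full function is in
# the gist above. update_item_kwargs and width=5 are assumptions for
# illustration; 'hit_count' and key "1" are from the walkthrough.
def update_item_kwargs(table_name: str, key: str = "1") -> dict:
    """Parameters a DynamoDB UpdateItem call would receive: the ADD action
    increments 'hit_count' by 1 and returns the updated value."""
    return {
        "TableName": table_name,
        "Key": {"id": {"S": key}},
        "UpdateExpression": "ADD hit_count :incr",
        "ExpressionAttributeValues": {":incr": {"N": "1"}},
        "ReturnValues": "UPDATED_NEW",
    }

def format_count(count: int, width: int = 5) -> str:
    """Left-pad the count with zeros: 234 -> '00234'."""
    return str(count).zfill(width)
```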
<h1 id="heading-terraform-provider">Terraform provider</h1>
<p>Time to write some Terraform for provisioning our resources. First, we declare the provider block in the <code>provider.tf</code> file. The <code>default_tags</code> block means that all resources created with this provider will be tagged with the provided map.</p>
<pre><code class="lang-apache"><span class="hljs-attribute">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-attribute">profile</span> = <span class="hljs-string">"YOUR_CLI_PROFILE"</span>
  <span class="hljs-attribute">region</span>  = <span class="hljs-string">"us-east-1"</span>
  <span class="hljs-attribute">default_tags</span> {
    <span class="hljs-attribute">tags</span> = {Name = var.name}
  }
}
</code></pre>
<h2 id="heading-variables">Variables</h2>
<p>We define some variables we need in <code>variables.tf</code>. We also declare the <code>full_name</code> local value, a concatenation of the <code>env</code> and <code>name</code> variables. It will mainly be used as the AWS-side name of major resources. This way we can easily replicate the project for a different environment, such as another region within the same account, without worrying about name collisions.</p>
<pre><code class="lang-apache"><span class="hljs-attribute">variable</span> <span class="hljs-string">"name"</span> {
  <span class="hljs-attribute">description</span> = <span class="hljs-string">"Name of application"</span>
  <span class="hljs-attribute">default</span> = <span class="hljs-string">"demo"</span>
} 
<span class="hljs-attribute">variable</span> <span class="hljs-string">"env"</span> {
  <span class="hljs-attribute">description</span> = <span class="hljs-string">"Environment name"</span>
  <span class="hljs-attribute">default</span> = <span class="hljs-string">"env"</span>
}
<span class="hljs-attribute">locals</span> {
  <span class="hljs-attribute">full_name</span> = <span class="hljs-string">"${var.env}-${var.name}"</span>
}
</code></pre>
<h2 id="heading-lambda-function">Lambda Function</h2>
<p>The rest of the code, except the outputs, is defined in <code>main.tf</code>. Here we provision the Lambda function, which runs on Python 3.8 using a deployment package in a zip archive. We also define an IAM role that the Lambda function can assume. The role attaches the AWSLambdaBasicExecutionRole managed policy and an inline policy that grants the Lambda function read and write access to the DynamoDB table.</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">resource</span> <span class="hljs-string">"aws_lambda_function"</span> <span class="hljs-string">"myfunc"</span> {
  <span class="hljs-attribute">filename</span>         = data.archive_file.zip.output_path
  source_code_hash = data.archive_file.zip.output_base64sha256
  function_name    = local.full_name
  description      = <span class="hljs-string">"Hit counter demo"</span>
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = <span class="hljs-string">"func.handler"</span> <span class="hljs-comment">#filename.handlermethod</span>
  runtime          = <span class="hljs-string">"python3.8"</span>
  environment {
    <span class="hljs-attribute">variables</span> = {
      <span class="hljs-attribute">TABLE_NAME</span> = aws_dynamodb_table.hitcount.name
    }
  }
}
resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"iam_for_lambda"</span> {
  <span class="hljs-attribute">name</span> = local.full_name
  assume_role_policy = data.aws_iam_policy_document.allow-lambda-assume.json
  managed_policy_arns = [
    <span class="hljs-string">"arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"</span>
  ]
  inline_policy {
    <span class="hljs-attribute">name</span>   = <span class="hljs-string">"ddbreadwrite"</span>
    policy = data.aws_iam_policy_document.ddbreadwrite.json
  }
}
data <span class="hljs-string">"archive_file"</span> <span class="hljs-string">"zip"</span> {
  <span class="hljs-attribute">type</span>        = <span class="hljs-string">"zip"</span>
  source_dir = <span class="hljs-string">"<span class="hljs-variable">${path.module}</span>/lambda/"</span>
  output_path = <span class="hljs-string">"<span class="hljs-variable">${path.module}</span>/packedlambda.zip"</span>
}
data <span class="hljs-string">"aws_iam_policy_document"</span> <span class="hljs-string">"allow-lambda-assume"</span> {
  <span class="hljs-section">statement</span> {
    <span class="hljs-attribute">effect</span>  = <span class="hljs-string">"Allow"</span>
    actions = [<span class="hljs-string">"sts:AssumeRole"</span>]
    principals {
      <span class="hljs-attribute">identifiers</span> = [<span class="hljs-string">"lambda.amazonaws.com"</span>]
      type        = <span class="hljs-string">"Service"</span>
    }
  }
}
</code></pre>
<h2 id="heading-database">Database</h2>
<p>The database is DynamoDB with a single table that has <code>id</code> and <code>hit_count</code> columns. We can store as many website pages as ids, with their corresponding hit counts in the <code>hit_count</code> column.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692598721505/a24289ab-130b-4f43-9509-2c8f40a4f565.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-nginx"><span class="hljs-attribute">data</span> <span class="hljs-string">"aws_iam_policy_document"</span> <span class="hljs-string">"ddbreadwrite"</span> {
  <span class="hljs-section">statement</span> {
    <span class="hljs-attribute">sid</span>       = <span class="hljs-string">"ddbreadwrite"</span>
    effect    = <span class="hljs-string">"Allow"</span>
    actions   = [<span class="hljs-string">"dynamodb:Scan"</span>, <span class="hljs-string">"dynamodb:PutItem"</span>, <span class="hljs-string">"dynamodb:GetItem"</span>, <span class="hljs-string">"dynamodb:UpdateItem"</span>]
    resources = [<span class="hljs-string">"*"</span>]
  }
}
resource <span class="hljs-string">"aws_dynamodb_table"</span> <span class="hljs-string">"hitcount"</span> {
  <span class="hljs-attribute">name</span>           = local.full_name
  billing_mode   = <span class="hljs-string">"PAY_PER_REQUEST"</span>
  hash_key       = <span class="hljs-string">"id"</span>
  attribute {
    <span class="hljs-attribute">name</span> = <span class="hljs-string">"id"</span>
    type = <span class="hljs-string">"S"</span>
  }
}
</code></pre>
<h2 id="heading-the-api">The API</h2>
<p>A Lambda function URL will serve as the HTTPS endpoint to our Lambda function. This is a quick way to create an API, but it is usually not suitable for production use.</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">resource</span> <span class="hljs-string">"aws_lambda_function_url"</span> <span class="hljs-string">"url1"</span> {
  <span class="hljs-attribute">function_name</span>      = aws_lambda_function.myfunc.function_name
  authorization_type = <span class="hljs-string">"NONE"</span>
  cors {
    <span class="hljs-attribute">allow_credentials</span> = <span class="hljs-literal">true</span>
    allow_origins     = [<span class="hljs-string">"*"</span>]
    allow_methods     = [<span class="hljs-string">"*"</span>]
    allow_headers     = [<span class="hljs-string">"date"</span>, <span class="hljs-string">"keep-alive"</span>]
    expose_headers    = [<span class="hljs-string">"keep-alive"</span>, <span class="hljs-string">"date"</span>]
    max_age           = <span class="hljs-number">86400</span>
  }
}
</code></pre>
<p>Here is the full <a target="_blank" href="https://github.com/nquayson/aws-hitcounter"><strong>GitHub code</strong></a> for the project.</p>
<h1 id="heading-lets-run-terraform"><strong>Let's run Terraform</strong></h1>
<p>To run Terraform we go through the workflow steps:</p>
<pre><code class="lang-bash">terraform init
terraform plan
terraform apply
</code></pre>
<h2 id="heading-outputs">Outputs</h2>
<p>Our resources should now be created in the AWS account. The terminal output shows the new endpoint. Making a GET request to this endpoint should return the hit count value.</p>
<pre><code class="lang-bash">Changes to Outputs:
  + function_endpoint = <span class="hljs-string">"https://4pxyxy0e.execute-api.us-east-2.amazonaws.com/prod/api"</span>
</code></pre>
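<p>For a quick smoke test of the deployed endpoint, a GET request from Python's standard library is enough (a sketch: <code>get_hit_count</code> is a name of my choosing, and the placeholder URL must be replaced with your own endpoint):</p>

```python
from urllib.request import urlopen

def get_hit_count(endpoint: str, timeout: float = 5.0) -> str:
    """GET the endpoint and return the response body as text
    (the zero-padded hit count)."""
    with urlopen(endpoint, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

# Example usage against your deployed endpoint:
# print(get_hit_count("https://<your-function-endpoint>"))
```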
<h1 id="heading-whats-next-embedding-the-counter-api-into-a-webpage">What's next: Embedding the counter API into a webpage</h1>
<p>Now that our API is successfully deployed, we can integrate it into a webpage by writing some JavaScript in our frontend.</p>
<pre><code class="lang-javascript">url = <span class="hljs-string">"https://4pxyxy0e.execute-api.us-east-2.amazonaws.com/prod/api"</span>
<span class="hljs-keyword">let</span> mcount = <span class="hljs-string">""</span>
fetch(url)
  .then(<span class="hljs-function"><span class="hljs-params">response</span> =&gt;</span> response.text())
  .then(<span class="hljs-function">(<span class="hljs-params">response</span>) =&gt;</span> {
      <span class="hljs-built_in">console</span>.log(response)
      mcount = response
      <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; mcount.length; i++) {
        <span class="hljs-keyword">var</span> newSpan = <span class="hljs-built_in">document</span>.createElement(<span class="hljs-string">'span'</span>);
        newSpan.innerHTML = mcount[i];
        <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">'mydiv'</span>).appendChild(newSpan);
      }
  })
  .catch(<span class="hljs-function"><span class="hljs-params">err</span> =&gt;</span> <span class="hljs-built_in">console</span>.log(err))
</code></pre>
<p>The code fetches data from our API endpoint via a GET request and then appends each character of the text response to an HTML span on the webpage.</p>
<p>See the full <a target="_blank" href="https://demos.nquayson.com/hitcounter/index.html">working demo here</a>.</p>
<p>Let me know your thoughts in the comment section.</p>
]]></content:encoded></item><item><title><![CDATA[Leadership development journey: 3 reflections from my TechStar experience]]></title><description><![CDATA[Fixed mindset: Ability is predetermined
Growth mindset: Learning takes time and effort.

TechStar is the global flagship Accenture program targeting high-performing consultants in Technology for developing leaders. This year, approximately 1% of high...]]></description><link>https://nquayson.com/leadership-development-journey-3-reflections-from-my-techstar-experience</link><guid isPermaLink="true">https://nquayson.com/leadership-development-journey-3-reflections-from-my-techstar-experience</guid><category><![CDATA[leadership]]></category><category><![CDATA[Mindset]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Mon, 31 Jul 2023 10:58:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Wyx2vBv8WU0/upload/293fd4e15f946a1a22b6574540c7d2ae.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Fixed mindset: Ability is predetermined</p>
<p>Growth mindset: Learning takes time and effort.</p>
</blockquote>
<p>TechStar is Accenture's global flagship program for developing leaders, targeting high-performing consultants in Technology. This year, approximately 1% of high-performing analysts and consultants in the Technology business were recognized and nominated for the program. It is not uncommon to find highly technically skilled and accomplished individuals in Tech teams. But how about leadership skills? I accepted my nomination into the program and couldn't wait to explore, network and engage with colleagues, luminaries and subject matter experts. I am happy to share some of the pivotal moments and lessons I learned along the unique six-month journey.</p>
<h1 id="heading-nurturing-a-growth-mindset">Nurturing a Growth mindset</h1>
<p>I have come to embrace the belief that my abilities and skills can be further developed through dedication, effort and a commitment to continuous improvement. During the journey, I had to complete a challenge focused on cultivating some key mindsets. One of which was the growth mindset.</p>
<p>This mindset has been especially significant in light of the challenges I faced during my initial years of living in the US, particularly in terms of communication. Many did not understand how drastic cultural changes had an impact on one's understanding, listening and speech. I used every piece of feedback I received as a valuable input for growth.</p>
<p>TechStar provided a supportive environment and resources necessary for me to reflect on these past experiences and channel my learnings for personal and professional development.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690161542281/4c2178e1-16e7-4dde-88e4-0c8a034f56c1.jpeg" alt class="image--center mx-auto" /></p>
<h1 id="heading-managing-priorities">Managing priorities</h1>
<p>One of the recommended books for the journey, which I had the privilege of reading and discussing during BookClub, was “The Phoenix Project.” Bill Palmer's experience in the novel resonated with me. As the new VP, he has to navigate the demands of several departments and make strategic decisions, and his team has to balance ongoing maintenance work with whatever urgent production issues arise.</p>
<p>In my role as a cloud consultant, I often find myself faced with the challenge of managing priorities. Balancing day-to-day tasks while also managing a small team can be challenging. It can be overwhelming at times, especially when the demands from various stakeholders and departments seem to pull in different directions. I have found that having a structured approach and using tools like Jira has helped make tasks more visible and organized.</p>
<p>Additionally, delegating tasks to teammates who may have more bandwidth, and based on individual strengths plays a crucial role in managing priorities!</p>
<h1 id="heading-sustainability">Sustainability</h1>
<p>My passion for nature and commitment to making the Earth a better place have been further reinforced. I learned that different programming languages have different energy-efficiency scores, with some being more 'green' than others. This has shifted how I think about selecting a programming language for a project, and I now also question the necessity of certain infrastructure choices. I am learning to strike the right balance between reliability and sustainability. I have become more conscious of avoiding unnecessary resource consumption, thereby contributing to a more sustainable technology landscape.</p>
<h1 id="heading-final-thoughts">Final thoughts</h1>
<p>In all, the learning sessions covered a wide range of topics, including the art of storytelling, health and well-being, inclusion and diversity, and cutting-edge technology domains like data and AI, cloud, security, sustainability, and enterprise and industry technologies. I am grateful for the recognition, inspiration and mentoring that I gained from participating in this flagship program. I was nominated for a purpose and I plan to use my learnings as guiding principles to shape my career growth. I am excited to embrace the challenges that lie ahead as I step into the next phase of my leadership development journey.</p>
]]></content:encoded></item><item><title><![CDATA[I passed the AWS SA Pro - time management was the biggest hurdle]]></title><description><![CDATA[After I completed the 3-hour long exam, to say I was tired is an understatement. I was totally worn out! On top of that, I didn't receive the PASS / FAIL preliminary evaluation that typically appears on the screen at the end of every AWS exam I have ...]]></description><link>https://nquayson.com/i-passed-the-aws-sa-pro-time-management-was-the-biggest-hurdle</link><guid isPermaLink="true">https://nquayson.com/i-passed-the-aws-sa-pro-time-management-was-the-biggest-hurdle</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[solutionarchitect]]></category><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Sat, 17 Dec 2022 19:22:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671002267274/iVOSX2GDh.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>After I completed the 3-hour long exam, to say I was tired is an understatement. I was totally worn out! On top of that, I didn't receive the PASS / FAIL preliminary evaluation that typically appears on the screen at the end of every AWS exam I have taken.</p>
</blockquote>
<h1 id="heading-experience-counts">Experience counts</h1>
<p>When AWS recommends the exam for candidates with 2+ years of <em>hands-on</em> experience building solutions on their cloud, they aren't kidding. The exam tests your competence in very practical scenarios, without a lot of time to think through the options. There is far more depth to the scenarios here than at the associate level.</p>
<p>There are a lot of areas here that go beyond the Solutions Architect Associate level. For example, knowledge of designing a hybrid architecture by integrating AD and other third-party IdPs with AWS Direct Connect or VPN.</p>
<p>In my current role at work, I have been involved in managing hundreds of AWS accounts for a large client. I used AWS Organizations, Control Tower, and SCPs for account management tasks, vending and governing accounts at scale. I used IaC and CI/CD tools for remediating changes across environments and for building and replicating environments.</p>
<h1 id="heading-exam-areas">Exam areas</h1>
<p>My knowledge from taking the SysOps Administrator and Developer associate exams was useful, as several concepts were within the scope of the AWS SA Pro exam.</p>
<blockquote>
<p>CloudWatch (Logs, metrics, events etc), SSM, Config, Secrets Manager, Service Catalog</p>
</blockquote>
<p>Additional topics, which I totally enjoyed being tested on, were migration tools and services:</p>
<blockquote>
<p>Data and application migration services such as AWS DataSync, AWS Transfer Family, AWS Snow Family, S3 Transfer Acceleration, AWS Application Discovery Service, AWS Application Migration Service, AWS Server Migration Service</p>
</blockquote>
<p>AWS provides an exam guide that lists all services and tools in the scope of this exam: <a target="_blank" href="https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Exam-Guide.pdf">https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Exam-Guide.pdf</a></p>
<h1 id="heading-study-plan">Study plan</h1>
<p>I used the Acloudguru course which was free through my workplace subscription. Other recommended courses include <a target="_blank" href="https://learn.cantrill.io/p/aws-certified-solutions-architect-professional">Adrian Cantril</a> and <a target="_blank" href="https://www.udemy.com/course/aws-solutions-architect-professional/">Stephan Maarek’s</a> courses.</p>
<p><strong>Practice tests</strong></p>
<p>I used <a target="_blank" href="https://portal.tutorialsdojo.com/courses/aws-certified-solutions-architect-professional-practice-exams/">Tutorials dojo</a> practice exams and Acloudguru practice exams.</p>
<p>It was important to take my time with the practice exams. I worked through the problems multiple times, reviewing the correct answers to understand them as well as why the incorrect options were wrong. Another thing that helped was learning to quickly eliminate obviously incorrect answer options and then decide which of the remaining options best answered the question.</p>
<p><strong>White papers</strong></p>
<p>I highly recommend reading the AWS whitepapers. <a target="_blank" href="https://aws.amazon.com/whitepapers">https://aws.amazon.com/whitepapers</a></p>
<h1 id="heading-time-management">Time management</h1>
<p>Managing my time for this exam was crucial. There were 75 questions to complete within 180 minutes, so roughly 2.4 minutes per question. The questions, as well as the answer options, can be very wordy. There were several questions, along with their answer choices, that I had to reread multiple times to understand the finer details. Spending more time than allotted on each question, I ran out of time with 7 questions left to complete.</p>
<h1 id="heading-exam-day">Exam day</h1>
<p>I took my exam online with PearsonVue, checking in about 10 minutes earlier than scheduled. There were 12 people waiting in line, and I waited an additional 30 minutes before my proctor was available.</p>
<p>After the long exam, I took the closing survey and was anxious about my results. To heighten my anxiety further, the PASS / FAIL preliminary evaluation did not appear; there was only the standard notice saying that AWS will evaluate results against compliance and policies and that the process can take up to 5 working days. This was unusual compared with my experience taking other AWS certifications with PearsonVue. At this point, I thought I had failed.</p>
<p>Early the next morning, I received my digital badge from Credly and an email from certmetrics congratulating me on passing my SA Pro exams! I hope my experience will be valuable to you.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Lambda Function URL using Terraform -  quick walkthrough]]></title><description><![CDATA[How many times have you had to create an API Gateway resource on AWS just to get through to your Lambda Function? I love using the serverless stack (AWS Lambda + API Gateway + DynamoDB) for running small applications. The Lambda and APIGateway combo ...]]></description><link>https://nquayson.com/aws-lambda-function-url-using-terraform-quick-walkthrough</link><guid isPermaLink="true">https://nquayson.com/aws-lambda-function-url-using-terraform-quick-walkthrough</guid><category><![CDATA[AWS]]></category><category><![CDATA[APIs]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Tue, 12 Apr 2022 16:52:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1649735173015/A5TLVkBL9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>How many times have you had to create an API Gateway resource on AWS just to get through to your Lambda Function? I love using the serverless stack (AWS Lambda + API Gateway + DynamoDB) for running small applications. The Lambda and APIGateway combo has served its purpose really well. Last week, <a target="_blank" href="https://aws.amazon.com/blogs/aws/announcing-aws-lambda-function-urls-built-in-https-endpoints-for-single-function-microservices/">AWS announced</a> Lambda Function URLs. A new Lambda feature that allows you to directly create an HTTPS endpoint for your Function.</p>
<p>Let's walk through how we can quickly set this up with terraform.</p>
<p>First, create the provider and terraform blocks in provider.tf.</p>
<pre><code class="lang-bash">touch provider.tf
</code></pre>
<pre><code class="lang-plaintext">terraform {
  required_providers {
    aws = {
      version = "&gt;= 4.9.0”
      source = "hashicorp/aws"
    }
  }
}
provider "aws" {
  profile = "your_cli_profile"
  region  = "us-east-1"
}
</code></pre>
<p>The most recent release of the <a target="_blank" href="https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.9.0">terraform-provider-aws</a> (v4.9.0) by HashiCorp includes the Lambda function URL functionality. Make sure to replace the profile with your CLI profile. If you don't already have an AWS access profile set up, refer to the <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html">AWS documentation</a> to configure one.</p>
<h1 id="heading-creating-the-lambda-url-resource">Creating the Lambda URL resource</h1>
<p>Now we will go ahead and create the Lambda function URL resource.</p>
<pre><code class="lang-bash">touch main.tf
</code></pre>
<pre><code class="lang-plaintext">resource "aws_lambda_function_url" "url1" {
  function_name      = aws_lambda_function.myfunc.function_name
  authorization_type = "NONE"

  cors {
    allow_credentials = true
    allow_origins     = ["*"]
    allow_methods     = ["*"]
    allow_headers     = ["date", "keep-alive"]
    expose_headers    = ["keep-alive", "date"]
    max_age           = 86400
  }
}
</code></pre>
<p>Note that <code>authorization_type = 'NONE'</code> makes your URL publicly accessible. This might be fine for testing; in higher environments, however, it must be avoided. We set our familiar CORS values, including <code>allow_methods</code>, which can be restricted to specific methods such as <code>"GET", "POST", "DELETE"</code>.</p>
<p>Before we can run this, we need a full lambda function resource, <code>myfunc</code>.</p>
<pre><code class="lang-plaintext">resource "aws_lambda_function" "myfunc" {
  filename         = data.archive_file.zip.output_path
  source_code_hash = data.archive_file.zip.output_base64sha256
  function_name    = "myfunc"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "func.handler"
  runtime          = "python3.8"

}

resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = &lt;&lt;EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

data "archive_file" "zip" {
  type        = "zip"
  source_dir = "${path.module}/lambda/"
  output_path = "${path.module}/packedlambda.zip"
}
</code></pre>
<p>We will use Python for the handler function.</p>
<pre><code class="lang-bash">mkdir lambda &amp;&amp; touch lambda/func.py
</code></pre>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">handler</span>(<span class="hljs-params">event, context</span>):</span>
    body = <span class="hljs-string">"hello"</span>
    response = {
        <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>,
        <span class="hljs-string">"statusDescription"</span>: <span class="hljs-string">"200 OK"</span>,
        <span class="hljs-string">"isBase64Encoded"</span>: <span class="hljs-literal">False</span>,
        <span class="hljs-string">"headers"</span>: {<span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"text/json; charset=utf-8"</span>},
        <span class="hljs-string">"body"</span>: body
        }

    <span class="hljs-keyword">return</span> response
</code></pre>
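<p>Before deploying, the handler can be smoke-tested locally. This sketch simply re-invokes a copy of the function above with stand-in arguments (Lambda supplies real event and context objects at runtime):</p>

```python
# Local smoke test: a copy of the handler from lambda/func.py above, invoked
# with stand-in arguments. At runtime, Lambda passes a populated event dict
# and a context object instead of the placeholders used here.
def handler(event, context):
    body = "hello"
    response = {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/json; charset=utf-8"},
        "body": body,
    }
    return response

response = handler(event={}, context=None)
print(response["statusCode"], response["body"])  # 200 hello
```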
<h1 id="heading-lets-run-terraform">Let's run terraform</h1>
<p>To run terraform we go through the workflow steps:</p>
<pre><code class="lang-plaintext">terraform init
terraform plan
terraform apply
</code></pre>
<p>Now let's go into the console to look at, and try out our new Lambda Function URL.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649726617397/v2m3jToQn.jpg" alt="Screen Shot 2022-04-11 at 8.07.34 PM.jpg" /></p>
<h1 id="heading-final-thoughts">Final thoughts</h1>
<p>Congrats! You just created a Lambda Function URL, a simple API for your serverless Lambda function. 
Lambda Function URLs do not have native API Gateway capabilities such as rate limiting, throttling, IP whitelisting/blacklisting, or authorizers. This has generated a lot of discussion in the wider developer community, along with concerns about how security best practices can be implemented.<br />On the plus side, they:</p>
<ul>
<li><p>have <em>no added cost</em> on top of Lambda</p>
</li>
<li><p>are quicker to implement than API Gateway</p>
</li>
<li><p>can be paired with CloudFront to tap into some of its inherent benefits, such as WAF, logging, and geo-targeting (delivering content to specific end-users).</p>
</li>
</ul>
<p>Here's the <a target="_blank" href="https://github.com/nquayson/terraform/tree/main/lambda_function_urls">GitHub code</a> for the project.</p>
]]></content:encoded></item><item><title><![CDATA[Answers to your common DNS CNAME issues]]></title><description><![CDATA[DNS issues can get really frustrating. You have a new shiny website/webapp that you want served up to your visitors. You also have a new domain which must be pointed at the CDN (or Load Balancer) of your host. While googling, you find some nice instr...]]></description><link>https://nquayson.com/answers-to-your-common-dns-cname-issues</link><guid isPermaLink="true">https://nquayson.com/answers-to-your-common-dns-cname-issues</guid><category><![CDATA[dns]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Thu, 17 Dec 2020 14:56:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1608092257451/FFj326YDw.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>DNS issues can get really frustrating. You have a shiny new website/webapp that you want served up to your visitors. You also have a new domain, which must be pointed at the CDN (or load balancer) of your host. While googling, you find some nice instructions for setting the DNS records. So you head to your DNS configuration, where you successfully set the CNAME record to point to your target. Now all you need to do is wait out the TTL for your new records to propagate, right? You check every hour or so for the next 24 hours, but your website is still not serving up. Wait! There must be something wrong. Is the host server down? Well, probably not. It has 99.999999999% availability. </p>
<blockquote>
<p>If this already sounds like you, take a deep breath. Remember, we have all been there; grab a cup of coffee or tea, and let's learn some more about CNAMEs. </p>
</blockquote>
<h1 id="what-at-all-is-a-cname">What at all is a CNAME?</h1>
<p>In simple terms, a CNAME record points a name to another name. It is used, for instance, when you want to point visitors of your domain name <code>app.example.com</code> to another name such as <code>app.something.com</code>. Below you can see how I pointed my blog URL <code>blog.nquayson.com</code> at Hashnode's network URL. It is that simple.  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608086466474/LizEIxNJR.png" alt="cname.PNG" /></p>
<h1 id="why-are-cname-records-so-common">Why are CNAME records so common?</h1>
<p>The rise in cloud computing is a factor here. A common way to point a domain to a CDN is to add the address as a CNAME. Also, many providers use CNAME records as a way to validate domain ownership. </p>
<h1 id="the-common-issues">The common issues:</h1>
<ul>
<li>A CNAME record cannot be placed at the apex/root domain level. </li>
<li>A CNAME record cannot be defined alongside other record types (e.g. MX, TXT, A) for the same name. </li>
<li>An MX record cannot point to a CNAME.</li>
<li>A CNAME record CAN point to another CNAME record; however, this practice is considered inefficient. </li>
</ul>
<h1 id="why-not-directly-point-at-ips-rather-than-names">Why not directly point at IPs rather than names?</h1>
<p>To ensure high availability, many modern hosting providers and CDNs run on distributed system architectures. When host nodes fail, or when a better server node becomes available (based on factors such as latency, proximity, health, or cost), providers may route traffic to a different node. This results in a change of destination IP, so your application would break if you had pointed your service directly at an IP.  </p>
<p>Some providers will allow one CNAME at the domain apex because of CNAME flattening, but that can cause other records such as TXT and MX to stop working properly. 
Let's say your domain name is <code>www.example.com</code>. 
We refer to <code>example.com</code> as the 'apex' (also called the 'naked' or 'root' domain). Some providers, such as Namecheap, use the '@' symbol to represent the apex. The problem with the apex domain is that setting a CNAME record there breaks any other records associated with that name. For instance, you might want to use an email service on your domain; however, by setting a CNAME for the apex, you will not be able to use email. </p>
<h1 id="are-there-any-workarounds">Are there any workarounds?</h1>
<p>Yes: CNAME flattening, or simply using www. First, you may have heard of the ALIAS record type (not to be confused with the 'A' record, which points a name to an IP address). </p>
<blockquote>
<p>The ALIAS record (also referred to as an ANAME) is not traditionally part of 
the DNS spec. It is a virtual record type usually offered by cloud DNS providers for their own internal use.  </p>
</blockquote>
<p>The way it works is: when the authoritative name server receives a request for a record that has an associated ALIAS record, it resolves the ALIAS first. Say the ALIAS points to another CNAME; it will then resolve that CNAME down to its A records. 
ALIAS records work much like CNAME records, but they solve some of the many problems associated with the CNAME record. </p>
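<p>The chain-following behaviour described above can be sketched with a toy resolver. Every name and address in the record table below is made up purely for illustration:</p>

```python
# Toy record table: each name maps to (record type, value).
# All names and the IP address are illustrative only.
RECORDS = {
    "example.com.": ("ALIAS", "edge.cdn-host.net."),
    "edge.cdn-host.net.": ("CNAME", "lb.cdn-host.net."),
    "lb.cdn-host.net.": ("A", "203.0.113.10"),
}

def resolve(name, max_hops=8):
    """Follow ALIAS/CNAME records until an A record (an IP) is reached."""
    for _ in range(max_hops):
        rtype, value = RECORDS[name]
        if rtype == "A":
            return value  # reached the final IP address
        name = value      # follow the alias to the next name
    raise RuntimeError("alias chain too long")
```

<p>A real authoritative server performing flattening does this chase server-side, so the client only ever sees the final A record.</p>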
<h1 id="what-if-your-dns-provider-does-not-support-the-alias-record">What if your DNS provider does not support the ALIAS record</h1>
<p>Rather than set the CNAME to your website at the domain apex, you could set it on a subdomain such as www. Keeping www as your main URL or canonical domain frees up the apex, as well as other subdomains, for other services such as email. </p>
<p>Let me know in the comments below whether you have faced any CNAME issues in the past, and how you resolved them! </p>
]]></content:encoded></item><item><title><![CDATA[Wondering how 'hello world' started?]]></title><description><![CDATA[When people start learning anything new in programming, the first code they write is usually called... well 'hello world'. If you're like me, you've always wondered the origin of this phrase in programming.
That started somewhere around the 1970s, wh...]]></description><link>https://nquayson.com/wondering-how-hello-world-started</link><guid isPermaLink="true">https://nquayson.com/wondering-how-hello-world-started</guid><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Tue, 01 Sep 2020 22:53:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1608580706771/n1UQ-Fuxg.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When people start learning anything new in programming, the first code they write is usually called... well 'hello world'. If you're like me, you've always wondered the origin of this phrase in programming.</p>
<p>The phrase started somewhere around the 1970s, when the C programming language was being developed at Bell Labs. One of its very first publications was a simple C program that output the string 'hello, world' followed by a newline character.</p>
<pre><code class="lang-plaintext">main( ) {
        printf("hello, world\n");
}
</code></pre>
<p>Today's hello world applications look a little different. There's no comma after the 'hello', and there is usually no need for a newline at the end of 'world'.</p>
<pre><code class="lang-plaintext">def hello():
    print("hello world")
</code></pre>
<p>For me personally, programming has been one of the most fascinating things I've ever learned. I learned programming in QBasic from an old book I chanced upon in my high school library. I found it very challenging and different. The sheer power of creativity that came with it kept me motivated to go on and on, in the process exploring basic data structures and understanding really old algorithms like bubble sort and insertion sort. I have since written at least a dozen hello world applications across different programming languages and platforms. Needless to say, it makes a good starting point for writing much more complex solutions. Cheers to learning! Connect with me on <a target="_blank" href="https://github.com/nquayson">Github</a>.</p>
]]></content:encoded></item><item><title><![CDATA[The cloud resume challenge was worth its weight in gold!]]></title><description><![CDATA[It has been one exciting project in which I have learned to put together several AWS Services and skills that I picked up along my AWS SAA certification journey. 

Around the beginning of this month, a good friend introduced me to, AWS Serverless Her...]]></description><link>https://nquayson.com/the-cloud-resume-challenge-was-worth-its-weight-in-gold</link><guid isPermaLink="true">https://nquayson.com/the-cloud-resume-challenge-was-worth-its-weight-in-gold</guid><category><![CDATA[serverless]]></category><category><![CDATA[Python]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Mon, 13 Jul 2020 23:30:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1635014770678/opxLTJdC4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>It has been one exciting project in which I have learned to put together several AWS Services and skills that I picked up along my AWS SAA certification journey. </p>
</blockquote>
<p>Around the beginning of this month, a good friend introduced me to AWS Serverless Hero Forrest Brazeal's <a target="_blank" href="https://cloudresumechallenge.dev/instructions/">cloud resume challenge</a>. It challenges participants to build a resume website on AWS cloud infrastructure, building and configuring services with cloud IaC automation in a CI/CD pipeline. </p>
<p>I found it at a time when I had newly passed the AWS Solutions Architect – Associate exam, and it presented a great opportunity to test myself with a practical problem, have fun practicing, and showcase my skills.  </p>
<p>Prior to this project, I had had some experience with relational database instances running on-prem and had worked on a few small Python projects. But I had never fully implemented a CI/CD pipeline for many of the cloud services this challenge required. The more I dived in, the more my interest grew. I perhaps spent more time reading (out of curiosity) than actually working on the project. </p>
<p>I accepted the challenge! In a nutshell, the steps taken to complete it, in no specific order, involved: creating the frontend and hosting it in S3; domain registration and DNS configuration; AWS SAM and CloudFormation for building the serverless backend stack, which comprised AWS API Gateway + Lambda (Python) + DynamoDB; and finally, continuous integration and deployment using GitHub Actions. My codebase was stored on GitHub, and I incorporated unit tests into my CI/CD pipeline. I took the bull by the horns and started from the backend, using a small dummy HTML file.</p>
<h1 id="heading-backend">Backend</h1>
<p>I built a small Linux (Ubuntu) box at home exclusively for the project. I installed and set up Python, pip, Boto3, the AWS CLI, the SAM CLI, Docker, and DynamoDB Local. This gave me the flexibility to try out features locally before meddling with resources in the cloud. 
First, I experimented with basic CRUD in DynamoDB Local using Python and Boto3. This was my first real experience with a NoSQL DB, and it was fairly easy to pick up. I also experimented with provisioning cloud resources from my local Linux CLI. I ended up creating multiple AWS SAM templates for my backend infrastructure which, again, comprised API Gateway, a Lambda function event, and DynamoDB. Running the AWS CLI and AWS SAM CLI, I succeeded in deploying my backend resources. The next task was to automate provisioning these same resources in a CI/CD pipeline. </p>
<h1 id="heading-scm-automation-and-cicd-pipeline">SCM, Automation and CICD Pipeline</h1>
<p>I did some extensive reading and comparison of Jenkins, AWS CodePipeline, CircleCI, and GitHub Actions. AWS CodePipeline looked like the easiest option for this task, but I wanted to try something new. 
I settled on GitHub Actions, which had been released a couple of months earlier. My Actions workflows run in a Linux Docker container, mostly executing bash scripts. Environment variables such as access keys were securely set in GitHub Secrets. 
For the frontend repository, on each push of my local code to the master branch of my GitHub repository, the Actions CI/CD is triggered, the workflow is executed, and the codebase is automatically synced to the hosting S3 bucket. Then the CloudFront CDN cache is invalidated. 
The workflow for the backend CI/CD pipeline involved setting up the job, checkout, installing the SAM CLI, unit testing, SAM build, deploy, and cache invalidation. The SAM template code is only built and deployed after the Python Lambda function passes the unit tests. </p>
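<p>The unit-testing gate in that pipeline can be pictured with a minimal sketch. The handler below is a hypothetical stand-in for the real visitor-count Lambda, not the actual project code:</p>

```python
import json
import unittest

def handler(event, context):
    # Hypothetical stand-in: the real function reads and updates DynamoDB
    return {"statusCode": 200, "body": json.dumps({"count": 1})}

class TestVisitorCounter(unittest.TestCase):
    """The kind of check the CI/CD step runs before SAM build/deploy."""

    def test_returns_200_and_a_count(self):
        response = handler({}, None)
        self.assertEqual(response["statusCode"], 200)
        self.assertIn("count", json.loads(response["body"]))
```

<p>In the workflow, a failing assertion here stops the job, so a broken function never reaches the deploy step.</p>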
<p><img src="https://dev-to-uploads.s3.amazonaws.com/i/7q7u8v59fh563ardmchp.PNG" alt="Alt Text" /></p>
<h1 id="heading-frontend">Frontend</h1>
<p>At this point, I was confident I had completed the bulk of the tasks required. I have basic HTML and CSS knowledge but had never built an entire website all by myself; I usually find it more comfortable working in the backend. It was challenging to find a website template that I liked exactly. So, I completely stripped apart a minimal HTML blog template I had found on Hugo and modified it to look like a resume (I spent lots of hours trying to make it look elegant ++insert smiling face++). I added a simple JS XHR GET request to the API to fetch the visitor count. This step helped brush up my rusty CSS skills and reminded me to appreciate design work even more. </p>
<h1 id="heading-dns-domain-hosting-cdn-ssl-certificate">DNS, Domain, Hosting, CDN, SSL Certificate</h1>
<p>I realize people like to keep things within the AWS ecosystem for simplicity, but it was not too hard to apply my DNS knowledge to hook my CloudFront distribution up to an external domain registrar. 
There were a few hiccups with AWS Certificate Manager during DNS validation using the CNAME record, but I was able to fix them with some googling. I also had to switch my AWS region to us-east-1 to use ACM, per the AWS FAQ. </p>
<h1 id="heading-final-words">Final Words</h1>
<p>This project presented a refreshing challenge, motivating me to do a lot of reading about Docker and the AWS CLI. I learned CI/CD automation for cloud infrastructure and how to include unit testing in the workflow pipeline. I learned to integrate many services that I had previously not used together, and I discovered portals where I can do further reading when I run into problems with AWS services. Big thanks to Forrest Brazeal for the opportunity to take part in this challenge. 
Here is the <a target="_blank" href="https://cloudresume.nquayson.com">end product</a>. And here are the <a target="_blank" href="https://github.com/nquayson/aws-solutions-architect-associate-notes">github notes</a> I co-authored after my studies for the AWS Solutions Architect – Associate exam. </p>
]]></content:encoded></item><item><title><![CDATA[An Automated Slope Monitoring System for a Major Mine in Sub-Saharan Africa]]></title><description><![CDATA[A few years back, I was glad to be a part of a team that implemented a monitoring solution for a Gold production giant in Ghana to address some pertinent environmental concerns. The solution had to make continuous 24-hour measurement observations to ...]]></description><link>https://nquayson.com/an-automated-slope-monitoring-system-for-a-major-mine-in-sub-saharan-africa</link><guid isPermaLink="true">https://nquayson.com/an-automated-slope-monitoring-system-for-a-major-mine-in-sub-saharan-africa</guid><category><![CDATA[System Architecture]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Nana Quayson]]></dc:creator><pubDate>Wed, 04 Dec 2019 05:49:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1634689953115/dBlxMiANB.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few years back, I was glad to be a part of a team that implemented a monitoring solution for a gold production giant in Ghana to address some pertinent environmental concerns. The solution had to take continuous, 24-hour measurements of the open-pit mine walls, then analyze them to detect the slightest movements, which are used to determine stability and predict wall failures in advance, preventing disasters, saving lives, and protecting equipment. Being the first project of its kind that we had undertaken in our capacity as partners of Leica Geosystems, it was a very rewarding and noteworthy, but also challenging, experience. </p>
<p><strong>The Mine Operations</strong>  </p>
<p>The mine was one of Newmont's operations in the Sub-Saharan region. The pit lies within the Sefwi Volcanic Belt, one of Ghana’s largest volcanic belts, and is an open-cast surface mine measuring approximately 2100 x 450 x 120 m. Traditionally, land surveyors had used manual total stations to measure distances and bearings from the mine to hundreds of fixed-prism targets installed in the walls of the open pit. At the time of writing, there were about 300 prisms installed in the mine pit, with more added weekly to cover all slopes in the pit walls. These readings were taken daily, usually in multiple sets, then submitted to the geotechnical engineers for their analyses; the engineers would typically receive thousands of x,y,z coordinate triplets or distance-bearing polar readings. This manual way of monitoring changed after the new system was implemented. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1634787766181/IpslgKWa6U.png" alt="newmont_arch.png" /></p>
<p><strong>Overview of the system </strong>  </p>
<p>The installed system consists of an optical robot (a Leica Nova MS60 MultiStation), which automatically learns and makes measurements to the carefully distributed targets; a DTM meteo sensor; and a Netmodule industrial router, all powered by a 12V solar panel supply. Leica GeoMoS Monitor and Leica GeoMoS Analyser software, installed in VMs, handle analyses and data aggregation. 
The GNSS receiver installed at the station receives differential GPS observations to check for relative movements at the monitoring base. The Netmodule industrial router handles communication, serving as a WLAN client and a conduit for data flow between the MS60, GeoMoS, and the meteo sensor. GeoMoS connects to the MS60 via a Wi-Fi network, collects data from the sensors, and stores it in a SQL database. For high availability, there is also a secondary radio network that the system automatically fails over to when anything goes wrong with the primary network. </p>
<p>The DTM Meteo Sensor is installed close to the monitoring station to measure atmospheric variations in temperature and pressure. These data are used to correct the measured slope distances by the MS60 sensor.  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1634689499225/U-KLOr-FS.jpeg" alt="1520199266367.jpeg" /></p>
<p>The MS60 was chosen over other monitoring sensors because it had the added capability to make reflectorless 3D scans and had earned a reputation as the world’s first self-learning MultiStation, automatically and continuously adapting to any environment. Data from the MS60 sensor is a combination of scans of sections of the mine wall and measurements to the prisms installed in the wall. This data is analyzed and organized by GeoMoS and stored in the database. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1634689472916/i_ExF1tUp.jpeg" alt="1520192703356.jpeg" /></p>
<p>With this monitoring system installed, geotechnical engineers can perform continuous, 24-hour slope stability analyses, and the system's data makes it possible to predict slope failures in advance. What are your thoughts?</p>
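<p>The core of such an analysis, comparing repeated prism coordinates between measurement epochs, can be sketched in a few lines. The prism IDs, coordinates, and alert threshold below are illustrative only, not real mine data:</p>

```python
import math

def flag_movement(epoch_a, epoch_b, threshold_m=0.005):
    """Return prisms whose 3D position moved more than threshold_m metres
    between two epochs, given dicts mapping prism id to (x, y, z)."""
    moved = {}
    for prism, xyz in epoch_a.items():
        d = math.dist(xyz, epoch_b[prism])  # 3D Euclidean displacement
        if d > threshold_m:
            moved[prism] = d
    return moved
```

<p>A production system like GeoMoS does far more (trend analysis, velocity, alarms), but per-prism displacement against a tolerance is the basic signal it builds on.</p>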
<p>UPDATE 2017:  <a target="_blank" href="https://cloudresume.nquayson.com/01_Keeping-a-vigilant-Eye.pdf">A more detailed account of this project appeared in Leica Geosystem's magazine Reporter 76.</a> </p>
]]></content:encoded></item></channel></rss>