<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[CloudOps with Lajah]]></title><description><![CDATA[This Publication contains the guides and contents regarding Cloud technology and its use cases.]]></description><link>https://lajahshrestha.com.np</link><generator>RSS for Node</generator><lastBuildDate>Sun, 19 Apr 2026 01:40:31 GMT</lastBuildDate><atom:link href="https://lajahshrestha.com.np/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AWS CodePipeline Alerts to Google Chat for Real-Time Notifications]]></title><description><![CDATA[In this guide, you'll learn how to integrate AWS CodePipeline notifications into Google Chat using CloudWatch, SNS, and a Lambda function. This solution provides real-time updates to your Google Chat space when a pipeline fails.

Steps Overview:

Cre...]]></description><link>https://lajahshrestha.com.np/aws-codepipeline-alerts-to-google-chat-for-real-time-notifications</link><guid isPermaLink="true">https://lajahshrestha.com.np/aws-codepipeline-alerts-to-google-chat-for-real-time-notifications</guid><category><![CDATA[AWS]]></category><category><![CDATA[CodePipeline]]></category><category><![CDATA[notifications]]></category><category><![CDATA[cloud operations]]></category><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Mon, 30 Dec 2024 01:27:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521930237/e0c96f72-e254-4a54-8d0d-4dce93206794.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this guide, you'll learn how to integrate AWS CodePipeline notifications into Google Chat using CloudWatch, SNS, and a Lambda function. This solution provides real-time updates to your Google Chat space when a pipeline fails.</p>
<hr />
<h2 id="heading-steps-overview"><strong>Steps Overview:</strong></h2>
<ol>
<li><p>Create a Google Chat Space and Webhook URL.</p>
</li>
<li><p>Configure CloudWatch Events to monitor pipeline failures.</p>
</li>
<li><p>Set up an SNS topic to handle notifications.</p>
</li>
<li><p>Process notifications with a Lambda function and send them to the Google Chat channel.</p>
</li>
</ol>
<hr />
<h2 id="heading-step-1-create-a-google-chat-space-and-webhook-url"><strong>Step 1: Create a Google Chat Space and Webhook URL</strong></h2>
<ol>
<li><p><strong>Create a Google Chat Space</strong></p>
<p> In Google Chat, create a space where you want the pipeline notifications to appear.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521071296/387da493-f1ab-4ac9-a97f-5d3a73c93ede.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Set Up Webhook Integration</strong></p>
<ul>
<li><p>Navigate to the space, click the dropdown menu, and go to <strong>Apps &amp; Integrations</strong>.</p>
</li>
<li><p>Select <strong>Manage Webhooks</strong> to generate a webhook URL. <strong>Note:</strong> Webhooks can only be created in Workspace accounts (not personal Google accounts), and the administrator must enable this feature.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521081890/a4d4e02f-7c66-49ab-a88d-2781c8bd7b31.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Save the Webhook URL</strong></p>
<p> Save the generated webhook URL. You'll use it later in the Lambda function.</p>
</li>
</ol>
<hr />
<h2 id="heading-step-2-configure-cloudwatch-to-monitor-pipeline-failures"><strong>Step 2: Configure CloudWatch to Monitor Pipeline Failures</strong></h2>
<p>AWS CodePipeline offers built-in notification targets such as Slack (via AWS Chatbot), but for a custom destination like Google Chat we’ll wire it up ourselves with CloudWatch.</p>
<ol>
<li><p><strong>Create a CloudWatch Event Rule</strong></p>
<ul>
<li><p>Go to <strong>CloudWatch &gt; Events &gt; Rules &gt; Create Rule</strong> (in newer consoles this lives under <strong>Amazon EventBridge &gt; Buses &gt; Rules</strong>).</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521149177/164ace4a-1cd9-45f6-a82b-5ed228c9cde0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Configure the rule to listen for CodePipeline execution state changes:</p>
<ul>
<li><p>Event Source: <strong>AWS CodePipeline</strong></p>
</li>
<li><p>Event Type: <strong>CodePipeline Action Execution State Change</strong></p>
</li>
<li><p>Specific State: <strong>FAILED</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521157866/d12a0508-54aa-425a-95e0-d8eddcf1e506.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Set the Target as an SNS Topic</strong></p>
<ul>
<li><p>Under the Target section, create an SNS topic.</p>
</li>
<li><p>Allow CloudWatch to send events to this SNS topic (a scripted version of this rule-and-target setup is sketched after this list).</p>
</li>
</ul>
</li>
</ol>
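<p>For reference, here’s a minimal boto3 sketch of the same rule-and-target setup (the rule and topic names are illustrative; the console steps above achieve the same result):</p>
<pre><code class="lang-python">import json

import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# Create an SNS topic to receive the pipeline-failure events.
topic_arn = sns.create_topic(Name="pipeline-failure-notifications")["TopicArn"]

# Match CodePipeline action executions that enter the FAILED state.
pattern = {
    "source": ["aws.codepipeline"],
    "detail-type": ["CodePipeline Action Execution State Change"],
    "detail": {"state": ["FAILED"]},
}
rule_arn = events.put_rule(
    Name="codepipeline-failed-actions",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)["RuleArn"]

# Point the rule at the SNS topic.
events.put_targets(
    Rule="codepipeline-failed-actions",
    Targets=[{"Id": "sns-target", "Arn": topic_arn}],
)

# Allow CloudWatch Events/EventBridge to publish to the topic.
sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="Policy",
    AttributeValue=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "events.amazonaws.com"},
            "Action": "sns:Publish",
            "Resource": topic_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": rule_arn}},
        }],
    }),
)
</code></pre>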
<hr />
<h2 id="heading-step-3-process-notifications-with-a-lambda-function"><strong>Step 3: Process Notifications with a Lambda Function</strong></h2>
<p>The Lambda function will process the SNS notification and send the formatted message to Google Chat.</p>
<ol>
<li><p><strong>Create a Lambda Function</strong></p>
<ul>
<li><p>Go to the AWS Lambda Console and create a new function named <code>FnPipelineNotificationProcessor</code>.</p>
</li>
<li><p>Select <strong>Python</strong> as the runtime.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521172998/2bc30b88-0cbb-40cd-98d4-254a532e8613.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Deploy the Code</strong></p>
<ul>
<li><p>Clone the repository from:<a target="_blank" href="https://github.com/LajahShrestha/AWSPipelineNotificationProcessor">https://github.com/LajahShrestha/AWSPipelineNotificationProcessor</a></p>
</li>
<li><p>Modify the <code>lambda_function.py</code> file:</p>
<ul>
<li>Replace the <code>url</code> value (line 7) with the Google Chat webhook URL you saved earlier (a rough sketch of the handler appears after this list).</li>
</ul>
</li>
<li><p>Zip the code and upload it to an S3 bucket.</p>
</li>
</ul>
</li>
<li><p><strong>Upload Code to Lambda</strong></p>
<ul>
<li><p>In the Lambda console, upload the zipped file from S3.</p>
</li>
<li><p>Increase the timeout (under the <strong>Configuration</strong> tab) to around 10 minutes.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521218861/834bed8f-a770-4d73-b1ee-fca2ac2e8d64.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Add SNS as a Trigger</strong></p>
<ul>
<li><p>Add the SNS topic you created as a trigger for the Lambda function.</p>
</li>
<li><p>Ensure the Lambda execution role has sufficient permissions to read from SNS.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521222529/ac84d957-121c-428c-8d58-73577d25311e.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
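<p>The exact handler lives in the repository linked above; as a rough sketch of the idea (assuming the standard SNS event shape and Google Chat’s simple <code>{"text": ...}</code> webhook payload), it boils down to something like this:</p>
<pre><code class="lang-python">import json
import urllib.request

# Placeholder -- replace with the webhook URL you saved in Step 1.
WEBHOOK_URL = "https://chat.googleapis.com/v1/spaces/SPACE_ID/messages?key=KEY&amp;token=TOKEN"


def lambda_handler(event, context):
    # SNS delivers the CloudWatch event as a JSON string in the Message field.
    detail = json.loads(event["Records"][0]["Sns"]["Message"]).get("detail", {})
    text = (
        f"Pipeline {detail.get('pipeline')} failed: "
        f"stage {detail.get('stage')}, action {detail.get('action')} "
        f"entered state {detail.get('state')}."
    )
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=UTF-8"},
    )
    # Google Chat returns the created message on success.
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status}
</code></pre>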
<hr />
<h2 id="heading-step-4-testing-the-integration"><strong>Step 4: Testing the Integration</strong></h2>
<ol>
<li><p><strong>Trigger a Pipeline Failure</strong></p>
<ul>
<li><p>Create a dummy pipeline or use an existing one.</p>
</li>
<li><p>Induce a failure to test the notification flow (or publish a synthetic event, as sketched after this list).</p>
</li>
</ul>
</li>
<li><p><strong>Verify Notifications</strong></p>
<ul>
<li><p>Check your Google Chat space for pipeline failure notifications.</p>
</li>
<li><p>If no notifications appear, review the Lambda function logs in CloudWatch.</p>
</li>
</ul>
</li>
</ol>
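<p>As an alternative to breaking a real pipeline, you can exercise the Lambda end to end by publishing a synthetic event to the SNS topic (a sketch; the topic ARN and pipeline names are placeholders):</p>
<pre><code class="lang-python">import json

import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pipeline-failure-notifications"

# Mimic the shape of a CodePipeline Action Execution State Change event.
fake_event = {
    "source": "aws.codepipeline",
    "detail-type": "CodePipeline Action Execution State Change",
    "detail": {
        "pipeline": "demo-pipeline",
        "stage": "Build",
        "action": "BuildAction",
        "state": "FAILED",
    },
}

boto3.client("sns").publish(TopicArn=TOPIC_ARN, Message=json.dumps(fake_event))
</code></pre>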
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735521227381/cc4765f1-4cb4-4a2c-aa16-fa86959c60b2.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-debugging-tips"><strong>Debugging Tips</strong></h2>
<ul>
<li><p><strong>Webhook Issues:</strong> Ensure the Google Chat webhook URL is valid and accessible.</p>
</li>
<li><p><strong>Lambda Logs:</strong> Check CloudWatch logs for any processing errors.</p>
</li>
<li><p><strong>Permissions:</strong> Verify the Lambda execution role has permissions to subscribe to and process SNS messages.</p>
</li>
<li><p><strong>Event Rule:</strong> Ensure the CloudWatch event rule is correctly configured to capture the desired pipeline state changes.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>With this setup, you can efficiently monitor AWS CodePipeline failures in Google Chat, ensuring prompt action when issues arise. This integration leverages the flexibility of CloudWatch, SNS, and Lambda to provide real-time notifications.</p>
<p>For further reference, check out this <a target="_blank" href="https://medium.com/@sabinkrshrestha/aws-codepipeline-and-codebuild-notification-in-google-chat-3e26a18ab78">detailed tutorial on Medium</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Centralizing VPC Flow Logs from AWS Accounts in an Organization Managed by Control Tower to a Single S3 Bucket in Log Archive Account]]></title><description><![CDATA[VPC Flow Logs are a crucial part of monitoring. VPC Flow Logs capture information about the IP traffic going to and from network interfaces within a Virtual Private Cloud (VPC). They provide detailed insights into network activity, such as identi...]]></description><link>https://lajahshrestha.com.np/centralizing-vpc-flow-logs-from-aws-accounts-in-an-organization-managed-by-control-tower-to-a-single-s3-bucket-in-log-archive-account</link><guid isPermaLink="true">https://lajahshrestha.com.np/centralizing-vpc-flow-logs-from-aws-accounts-in-an-organization-managed-by-control-tower-to-a-single-s3-bucket-in-log-archive-account</guid><category><![CDATA[#cloudOps #VPC #flowlogs]]></category><category><![CDATA[@CloudOps Community]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Control Tower]]></category><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Sun, 01 Dec 2024 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736179099310/57748f63-aa64-4b9e-9cdb-2aaaa41440a4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>VPC Flow Logs are a crucial part of monitoring. VPC Flow Logs capture information about the IP traffic going to and from network interfaces within a Virtual Private Cloud (VPC). They provide detailed insights into network activity, such as identifying unusual traffic patterns, diagnosing connectivity issues, and monitoring for security threats. By analyzing VPC Flow Logs, organizations can gain valuable data insights for troubleshooting and improving network performance. These logs can be integrated into monitoring and analytics tools like Splunk or the ELK Stack (Elasticsearch, Logstash, and Kibana), enabling centralized log management, real-time visualization, and advanced threat detection across network traffic.</p>
<p>In this blog we’ll look at how to collect VPC Flow Logs from multiple accounts into a single S3 bucket. This use case arises in an AWS Organization with multiple AWS accounts, where a single S3 bucket serves as the source of logs for building visual dashboards from the data.</p>
<h1 id="heading-prerequisitenot-mandatory">Prerequisite(Not mandatory):</h1>
<ol>
<li><p>AWS Organization setup</p>
</li>
<li><p>Log Archive Account setup by AWS Control tower</p>
</li>
<li><p>VPCs</p>
</li>
</ol>
<h1 id="heading-steps-to-follow">Steps to follow:</h1>
<ol>
<li><p>Create a central S3 bucket (sink) in the Log Archive account</p>
</li>
<li><p>Update the destination bucket policy</p>
</li>
<li><p>Create a VPC</p>
</li>
<li><p>Enable VPC Flow Logs &amp; select the log format</p>
</li>
<li><p>Choose the central bucket as the destination</p>
</li>
</ol>
<h2 id="heading-create-a-central-s3-bucketsink-in-log-archive-account">Create a central S3 Bucket(sink) in Log Archive Account</h2>
<p>While managing AWS organization from AWS Control tower, it creates a Log Archive account which serves the purpose of storing different kinds of logs across the organization. We’ll also be centralizing the logs into a self created S3 bucket inside this account.</p>
<ul>
<li><p>Create an S3 bucket from the Management Console.</p>
</li>
<li><p>After creating the S3 bucket, the bucket must be allowed to receive the log files written to it from multiple sources. In our use case, the sources are the VPCs of all the AWS accounts across the organization.</p>
</li>
</ul>
<h2 id="heading-bucket-policy-for-centralized-flow-log">Bucket Policy for Centralized Flow log</h2>
<p>Update the bucket Policy as the following:</p>
<pre><code class="lang-jsx">{
    <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-string">"Statement"</span>: [
        {
            <span class="hljs-string">"Sid"</span>: <span class="hljs-string">"AWSLogDeliveryWrite"</span>,
            <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-string">"Principal"</span>: {
                <span class="hljs-string">"Service"</span>: <span class="hljs-string">"delivery.logs.amazonaws.com"</span>
            },
            <span class="hljs-string">"Action"</span>: <span class="hljs-string">"s3:PutObject"</span>,
            <span class="hljs-string">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:s3:::DESTINATION_BUCKET_NAME"</span>,
                <span class="hljs-string">"arn:aws:s3:::DESTINATION_BUCKET_NAME/*"</span>
            ],
            <span class="hljs-string">"Condition"</span>: {
                <span class="hljs-string">"StringEquals"</span>: {
                    <span class="hljs-string">"s3:x-amz-acl"</span>: <span class="hljs-string">"bucket-owner-full-control"</span>,
                    <span class="hljs-string">"aws:SourceAccount"</span>: [
                        <span class="hljs-string">"SOURCE_ACCOUNT_NUMBER_1"</span>,
                        <span class="hljs-string">"SOURCE_ACCOUNT_NUMBER_2"</span>,
                        <span class="hljs-string">"SOURCE_ACCOUNT_NUMBER_3"</span>
                    ]
                },
                <span class="hljs-string">"ArnLike"</span>: {
                    <span class="hljs-string">"aws:SourceArn"</span>: [
                        <span class="hljs-string">"arn:aws:logs:REGION:SOURCE_ACCOUNT_NUMBER_1:*"</span>,
                        <span class="hljs-string">"arn:aws:logs:REGION:SOURCE_ACCOUNT_NUMBER_2:*"</span>,
                        <span class="hljs-string">"arn:aws:logs:REGION:SOURCE_ACCOUNT_NUMBER_3:*"</span>
                    ]
                }
            }
        },
        {
            <span class="hljs-string">"Sid"</span>: <span class="hljs-string">"AWSLogDeliveryCheck"</span>,
            <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-string">"Principal"</span>: {
                <span class="hljs-string">"Service"</span>: <span class="hljs-string">"delivery.logs.amazonaws.com"</span>
            },
            <span class="hljs-string">"Action"</span>: [
                <span class="hljs-string">"s3:GetBucketAcl"</span>,
                <span class="hljs-string">"s3:ListBucket"</span>
            ],
            <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::DESTINATION_BUCKET_NAME"</span>,
            <span class="hljs-string">"Condition"</span>: {
                <span class="hljs-string">"StringEquals"</span>: {
                    <span class="hljs-string">"aws:SourceAccount"</span>: [
                        <span class="hljs-string">"SOURCE_ACCOUNT_NUMBER_1"</span>,
                        <span class="hljs-string">"SOURCE_ACCOUNT_NUMBER_2"</span>,
                        <span class="hljs-string">"SOURCE_ACCOUNT_NUMBER_3"</span>
                    ]
                },
                <span class="hljs-string">"ArnLike"</span>: {
                    <span class="hljs-string">"aws:SourceArn"</span>: [
                        <span class="hljs-string">"arn:aws:logs:REGION:SOURCE_ACCOUNT_NUMBER_1:*"</span>,
                        <span class="hljs-string">"arn:aws:logs:REGION:SOURCE_ACCOUNT_NUMBER_2:*"</span>,
                        <span class="hljs-string">"arn:aws:logs:REGION:SOURCE_ACCOUNT_NUMBER_3:*"</span>
                    ]
                }
            }
        }
    ]
}
</code></pre>
<p>Here DESTINATION_BUCKET_NAME is the name of the S3 bucket we just created above. You may add more AWS accounts as needed in the same format, replacing the placeholders like "SOURCE_ACCOUNT_NUMBER_1" and "arn:aws:logs:REGION:SOURCE_ACCOUNT_NUMBER_1:*".</p>
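<p>If you’re scripting the setup, the same policy can be applied with boto3. A minimal sketch, assuming the policy above has been saved (with the placeholders filled in) to a local file:</p>
<pre><code class="lang-python">import boto3

# Placeholder names -- substitute your own bucket and policy file.
BUCKET = "DESTINATION_BUCKET_NAME"

with open("flow-log-bucket-policy.json") as f:
    policy_document = f.read()

# Run this with credentials for the Log Archive account.
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=policy_document)
</code></pre>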
<h2 id="heading-enable-vpc-flow-logs-amp-select-the-logs-format">Enable VPC Flow logs &amp; select the logs format</h2>
<p>Create a VPC if one does not already exist. To enable VPC Flow Logs, select the desired VPC, navigate to the Flow logs tab under the VPC, and select Create flow log.</p>
<p>Provide a name, select “Send to an Amazon S3 bucket” as the destination, and enter the sink bucket ARN (created in the earlier step). Choose the desired format and select Create.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736179233494/fa365997-974a-46e5-ad83-32a261732c85.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736179237734/36c32a4b-c301-495e-80e0-d06a86d3c7d7.png" alt class="image--center mx-auto" /></p>
<p>After some time you can see the logs being populated in S3:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736179241025/f4ec41e4-a4db-41c1-86fc-76841e53d75b.png" alt class="image--center mx-auto" /></p>
<p>The logs are stored in the following format: <code>Bucket_name/AWSLogs/ACCOUNT_NUMBER/vpcflowlogs/region/year/month/day/logfilename.log.gz</code></p>
<p>You can repeat the same flow-log-enablement step for all the required accounts, and the logs will be centralized in one single bucket across the whole organization.</p>
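<p>If you’d rather script this step across many accounts, a minimal boto3 sketch (assuming you already have credentials for each source account; the bucket ARN and VPC ID are placeholders) could look like this:</p>
<pre><code class="lang-python">import boto3

# Placeholders -- substitute your central sink bucket and target VPC.
BUCKET_ARN = "arn:aws:s3:::DESTINATION_BUCKET_NAME"
VPC_ID = "vpc-0123456789abcdef0"

ec2 = boto3.client("ec2")  # uses the source account's credentials

resp = ec2.create_flow_logs(
    ResourceIds=[VPC_ID],
    ResourceType="VPC",
    TrafficType="ALL",          # capture both accepted and rejected traffic
    LogDestinationType="s3",
    LogDestination=BUCKET_ARN,  # the central sink bucket
)
print(resp["FlowLogIds"], resp["Unsuccessful"])
</code></pre>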
<h2 id="heading-reference">Reference:</h2>
<p><a target="_blank" href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-vpc-flow-logs-for-centralization-across-aws-accounts.html">https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-vpc-flow-logs-for-centralization-across-aws-accounts.html</a></p>
]]></content:encoded></item><item><title><![CDATA[Accelerating Software Delivery with AWS.]]></title><description><![CDATA[Accelerating speed in deployment is essential in today's competitive landscape, where user expectations for seamless and responsive applications are higher than ever. Rapid deployment allows organizations to quickly deliver new features, fix bugs, an...]]></description><link>https://lajahshrestha.com.np/accelerating-software-delivery-with-aws</link><guid isPermaLink="true">https://lajahshrestha.com.np/accelerating-software-delivery-with-aws</guid><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Sun, 17 Nov 2024 09:34:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731835865554/2b4cf580-21b7-4f77-ba3d-ab2fa604f421.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Accelerating speed in deployment is essential in today's competitive landscape, where user expectations for seamless and responsive applications are higher than ever. Rapid deployment allows organizations to quickly deliver new features, fix bugs, and respond to market changes, thus enhancing user satisfaction and engagement. This is where DevOps engineers play a crucial role in the software development lifecycle.</p>
<p>In this blog, I narrate the story of Shuri and her journey of application development, transitioning from a local setup to a cloud-based solution at an industry scale, all while incorporating characters from the Marvel Cinematic Universe to add an engaging twist.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834799328/0f93b02e-5180-4546-9f98-8dcf2b74b714.jpeg" alt class="image--center mx-auto" /></p>
<h1 id="heading-the-story">The Story:</h1>
<p>Our story begins with our first character, Ms. Shuri. You must recognize her from the Marvel Cinematic Universe. But since we’re in the multiverse now, this is not the MCU. This IS the ACU, and you know what ACU means: the AWS Cinematic Universe. Shuri is a college student and a part-time developer at a startup company in her nation who happens to be a very creative problem solver.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834822250/989f1a05-d77c-4c18-830a-2dd7fdfc521f.jpeg" alt class="image--center mx-auto" /></p>
<p>Being a full-time student, she needs to take notes for multiple subjects and keep track of assignments. Shuri struggles to manage and track the enormous number of documents that need to be submitted while also managing her professional job. So she comes up with an idea, Anote: a digital notebook application for taking notes and managing documents.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834833223/3978792d-33ac-434d-ba52-f580ba6a0975.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Phase 1: Local Development</strong></p>
<p>Shuri develops the application locally on her own system. She uses Python with Flask for the backend, React for the frontend, and SQLite for the database.</p>
<p>Initial Features:</p>
<ul>
<li><p>Create, edit, and delete notes</p>
</li>
<li><p>Share notes via unique links</p>
</li>
<li><p>User authentication with basic username and password</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834840140/16be89be-1953-4371-b5ab-a31c33c556da.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Phase 2: Initial Deployment</strong></p>
<p>Shuri realizes she can't use the same device every time and needs to access her notes from multiple devices.</p>
<p>To make the contents accessible from the web, she decides to host the application.</p>
<ul>
<li><p>She deploys it on her own using an EC2 instance with AWS's free tier service.</p>
</li>
<li><p>She replaces SQLite with Amazon RDS for improved database management.</p>
</li>
</ul>
<p><strong>Phase 3: Growing User Base</strong></p>
<ul>
<li><p>Shuri shares her application with college students, realizing she needs to create a login functionality.</p>
</li>
<li><p>The application proves helpful, but it frequently goes down. Upon examining the nginx logs, Shuri notices that higher request volumes cause the server to crash, indicating insufficient server capacity.</p>
</li>
<li><p>To address the current issue, given the small but growing user base and the server capacity constraints, Shuri vertically scales the application. She upgrades the server from 2 GB to 8 GB of RAM, accommodating the fourfold increase in users.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834848933/339e4d5b-1d26-4f1a-80a9-14b9e311c424.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834853013/57047214-648b-40d3-a987-fba5058a0f66.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Phase 4: CI/CD</strong></p>
<ul>
<li>As the application gained popularity among students and attracted investors' attention, the user base expanded rapidly.</li>
</ul>
<p>With this growth, data storage became a significant challenge. Shuri switched from EBS volumes to Amazon S3, a serverless object-level storage service. As demand increased, the need for continuous improvement in reliability and user-friendliness became apparent. Shuri would make changes locally, test them, and push the code to the repository.</p>
<p>New team members, including front-end developers, were brought on board as the application needed a full upgrade.</p>
<p>The development process involved pulling the latest code from the repository and running build and run commands to reflect changes—even for minor updates like adjusting font sizes. This manual and repetitive process proved time-consuming and inefficient.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834868876/ecd14492-96ea-4c32-a6a1-8c50aee7d862.jpeg" alt class="image--center mx-auto" /></p>
<p>This demanded automation to avoid time-consuming, manual, repetitive tasks. This is where our next character, <strong>Peter</strong>, comes into the story:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834873886/6a7d618d-2460-43c0-a15c-bf7a950dce5e.jpeg" alt class="image--center mx-auto" /></p>
<p>Automation: Peter addressed this challenge by implementing a CI/CD pipeline. This pipeline automates the repetitive build and deploy process, triggering automatically when code is pushed to the repository. (CodePipeline/CodeBuild)</p>
<p>With this automation in place, Shuri can now focus on adding new functionality without worrying about server deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834880756/a93a23c5-29ef-4098-9767-fd5df74ac33c.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Phase 5: Need for a Standardized Deployment Process</strong></p>
<p>Shuri's passion for her product and the positive user feedback drove her to continuously add features and address reported issues. However, a problem emerged: while the changes worked perfectly on her local machine, deploying them to the live server often caused the application to crash, resulting in downtime. This highlighted the need for a more robust deployment strategy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834887239/4429da5b-7d64-482e-ad77-28767954cbe8.jpeg" alt class="image--center mx-auto" /></p>
<p>Two major pre-deployment procedures were missing from the whole process, and two key issues emerged:</p>
<p><strong>Issue 1:</strong></p>
<p>Inconsistency between development and live environments.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834894714/1aa3b8ba-4107-4c63-83dc-f993d65dee29.jpeg" alt class="image--center mx-auto" /></p>
<p>Solution by Peter: Containerization</p>
<p>To maintain consistency, the application was containerized using Docker.</p>
<p>Docker is a platform for developing, shipping, and running applications in containers. Containers are lightweight, portable, and self-sufficient units that can run consistently across different environments. Docker solves several key problems:</p>
<ul>
<li><p>Environment consistency: It ensures that the application runs the same way in development, testing, and production environments.</p>
</li>
<li><p>Isolation: Containers isolate applications from each other and from the underlying system, reducing conflicts and improving security.</p>
</li>
<li><p>Efficiency: Docker containers are more lightweight and use fewer resources compared to traditional virtual machines.</p>
</li>
<li><p>Portability: Containerized applications can easily be moved between different systems and cloud platforms.</p>
</li>
</ul>
<p>By using Docker, Shuri and her team can package the application and its dependencies into a container, ensuring that it behaves consistently across all environments and simplifying the deployment process.</p>
<p><strong>Issue 2:</strong></p>
<ol>
<li><p>Lack of thorough testing and validation. Solution: Set up development, UAT, and production environments.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834912031/b42fd374-8985-41ae-9b64-6ac6e49d65f6.jpeg" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>Peter, our DevOps hero, steps in. He establishes three crucial stages: Development → Staging → Production.</p>
<p>Peter implements a three-stage deployment process:</p>
<ul>
<li><p><strong>Development:</strong> Where new features are built and initially tested</p>
</li>
<li><p><strong>Staging (UAT):</strong> A mirror of production for final testing and validation</p>
</li>
<li><p><strong>Production:</strong> The live environment accessed by users</p>
</li>
</ul>
<p>This approach ensures thorough testing and reduces the risk of issues in the live environment, improving overall application stability and user experience.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834920674/488ab871-4d83-4a15-af25-dc3030661a85.jpeg" alt class="image--center mx-auto" /></p>
<p>And to tackle the manual, repetitive job of replicating the environment twice, Peter created a CloudFormation script, using it as an infrastructure-as-code tool.</p>
<p>Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Here's how it helps Peter:</p>
<ul>
<li><p>Consistency: IaC ensures that the same environment is reproduced every time, eliminating discrepancies between development, staging, and production.</p>
</li>
<li><p>Automation: Peter can automatically create and manage multiple environments without manual intervention, saving time and reducing human errors.</p>
</li>
<li><p>Version Control: Infrastructure configurations can be versioned, allowing Peter to track changes and roll back if needed.</p>
</li>
<li><p>Scalability: As Shuri's application grows, Peter can easily scale the infrastructure by modifying the CloudFormation script.</p>
</li>
<li><p>Documentation: The CloudFormation script serves as living documentation of the infrastructure, making it easier for team members to understand the setup.</p>
</li>
</ul>
<p>By using CloudFormation as an IaC tool, Peter significantly streamlines the process of creating and managing multiple environments, ensuring consistency and reducing the time and effort required for infrastructure management.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834936951/f34831d9-6450-4b47-8858-12b28e482fe0.jpeg" alt class="image--center mx-auto" /></p>
<p>Billing Hazard: Replicating environments and using more resources led to continuous, never-ending billing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834944305/c393ee3c-fe19-4787-9348-0a2d01fbf9d3.jpeg" alt class="image--center mx-auto" /></p>
<p>Peter's solution: Turn off EC2 instances and RDS during off-hours and at night using EventBridge Scheduler and auto-shutdown features. This reduced costs by nearly half (a sketch of such a schedule follows below).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834949318/0b689871-d803-4ea6-b454-9f2a96aa27e9.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Phase 6: Gaining Popularity and the Need for Auto Scaling</strong></p>
<p>As the application gained popularity, it attracted a large audience. During exam periods, there were sudden, unpredictable spikes in traffic. These huge surges caused the server to crash, even though it operated within normal limits on typical days.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834955782/3743bc12-fef7-4faf-91b9-a2fc1e57cdd2.jpeg" alt class="image--center mx-auto" /></p>
<p>To address this challenge, Peter implements an auto-scaling architecture that dynamically adjusts server capacity based on real-time metrics. He utilizes two key AWS services: <strong>Auto Scaling Groups (ASG)</strong> and <strong>Application Load Balancers (ALB)</strong>. The ASG automatically increases or decreases the number of EC2 instances in response to fluctuating demand, while the ALB efficiently distributes incoming traffic across these instances. This setup ensures optimal performance during traffic spikes, such as exam periods, while maintaining cost-effectiveness during periods of lower usage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731834977221/e87ce9f3-6184-4b62-8835-3716a07f3d5b.jpeg" alt class="image--center mx-auto" /></p>
<p>Chain of Solutions: As we solve one problem, a new challenge often emerges, requiring another solution. This cycle of problem-solving and adaptation has been a key feature of Peter and Shuri’s journey so far.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731835006890/ef4276c4-5ad2-45c6-bf7b-0bb7f19f1946.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Even after implementing autoscaling, the application was slowing down despite CPU and memory usage being within normal limits.</p>
</li>
<li><p>Peter examined the statistics provided by AWS CloudWatch logs and alarms. He noticed that the database response time was increasing, indicating that the function responsible for fetching data from RDS was becoming a bottleneck due to numerous simultaneous requests.</p>
</li>
<li><p>To address this, Shuri restructured the application into a microservice architecture. This allowed the module responsible for fetching data to be scaled and updated independently without causing system-wide downtime.</p>
</li>
<li><p>The team deployed the new architecture to Amazon ECS using Fargate for automated, managed scaling.</p>
</li>
</ul>
<p><strong>Summary:</strong></p>
<p>This story illustrates the overall DevOps process, demonstrating how DevOps culture accelerates software delivery to users. It specifically showcases the journey from basic to advanced implementations using AWS services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731835016566/2897b1e8-83a7-482b-aada-8545114bf24e.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731835030600/96eced55-64c2-433b-8ed1-0d4ace6198a4.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Behind the Scenes and Onstage: My Journey as Organizer and Speaker at Nepal's First AWS Student Community Day 2024]]></title><description><![CDATA[Namaste!
In this blog, I’m excited to share my journey as both an organizer and speaker at Nepal’s inaugural Student Community Day, held on September 29, 2024. This event was a landmark gathering that brought together passionate students and professi...]]></description><link>https://lajahshrestha.com.np/behind-the-scenes-and-onstage-my-journey-as-organizer-and-speaker-at-nepals-first-aws-student-community-day-2024</link><guid isPermaLink="true">https://lajahshrestha.com.np/behind-the-scenes-and-onstage-my-journey-as-organizer-and-speaker-at-nepals-first-aws-student-community-day-2024</guid><category><![CDATA[AWSStudentCommunityDay]]></category><category><![CDATA[#lajahshrestha]]></category><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Tue, 29 Oct 2024 18:15:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736353995524/f1bec010-cf38-462a-875b-521585be9986.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Namaste!</p>
<p>In this blog, I’m excited to share my journey as both an organizer and speaker at Nepal’s inaugural Student Community Day, held on September 29, 2024. This event was a landmark gathering that brought together passionate students and professionals eager to connect, learn, and celebrate the power of technology and community.</p>
<hr />
<h2 id="heading-pre-event-planning">Pre-Event Planning</h2>
<p>Our team spent months meticulously preparing for the event. The event was originally scheduled for October 5, but we realized that date fell during Dashain, a major festival in Nepal. Many students would likely have left the Kathmandu Valley to celebrate with family, so we decided to reschedule for Sunday, September 29, to ensure maximum participation and accommodate our VIP speakers.</p>
<h3 id="heading-venue-challenges">Venue Challenges:</h3>
<p>As we shifted the date, unexpected challenges arose. We had arranged to host the event at St. Xavier’s College in Maitighar, where a Saturday event would not interfere with regular classes. However, moving the event to a Sunday created logistical hurdles, as classes would be in session. The college was initially hesitant due to concerns about managing crowds, parking, and catering. After extensive planning and coordination with the venue, we finally secured the location.</p>
<p>Unexpected Obstacles: Just days before the event, Kathmandu experienced relentless rainfall, leading to widespread flooding. My home’s ground floor was flooded, and like many of our team members, I couldn’t reach the college on the 28th for final preparations. Additionally, heavy rainfall caused landslides, preventing out-of-valley students we had sponsored from traveling to Kathmandu. As the rain continued, our team felt the weight of months of preparation slipping out of reach, yet we persevered, determined to make the event a success.</p>
<h2 id="heading-event-day"><strong>Event Day</strong></h2>
<p>September 29 was the big day – the culmination of our hard work. I arrived at the college early, relieved to see that the venue setup team had completed their work despite the previous day’s rain. I was soon busy welcoming VIPs, guests, and speakers, ensuring everyone felt at ease.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730168365455/7f16de6f-9bf1-4e38-b1e6-3e88c8568fe3.jpeg" alt class="image--center mx-auto" /></p>
<p>As I made my way to the main event area, I was thrilled to see the venue filled with attendees. Every seat in the front section was taken, with the crowd extending back to the lower court. Despite the obstacles, the turnout was overwhelming and heartwarming.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730168308264/8ab21df5-20fc-4106-a495-7d44000ddf96.jpeg" alt /></p>
<p>The event commenced with the National Anthem, followed by a warm welcome from the college principal, who introduced our esteemed keynote speakers: Mr. Sambit Bhattrai and Mrs. Jen Looper from Amazon (AWS), and Mr. Sanjeev Pant from PM Square. The day was packed with inspiring talks, engaging activities, quizzes, prizes, and a CTF (Capture the Flag) competition. Know more about it <a target="_blank" href="https://csaju.com/posts/aws-student-community-day-nepal-ctf-writeup/">here</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730169971719/c5c39b54-8b59-4235-b527-95089ed84401.jpeg" alt /></p>
<h3 id="heading-my-session">My Session</h3>
<p>I was honored to lead a breakout session on "Accelerating the Software Lifecycle with AWS and the Role of DevOps." As this was my first time delivering a technical session to such a large audience, I put in extra effort to create an engaging experience. Rather than a typical presentation, I crafted my session as a narrative, weaving in characters from the Marvel Cinematic Universe (MCU) to illustrate key concepts. This approach captured the audience’s attention, making complex ideas more relatable and enjoyable. Know more about the session on second part of the blog <a target="_blank" href="https://lajahshrestha.com.np/accelerating-software-delivery-with-aws">here</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730168336835/58e859b9-d087-4358-8414-b341bf751fd6.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Reflections on the Event:</strong> The event was a resounding success, with over 750 attendees. One of the standout moments amidst all the challenges came when, after my session, an audience member approached me to share that she had previously held a negative view of DevOps due to a past experience. I was delighted to have changed her perspective, revealing the true potential and collaborative nature of DevOps.</p>
<p>In the end, despite the unexpected challenges, Nepal’s first Student Community Day surpassed our expectations. The passion and dedication of everyone involved made it a memorable experience, and I’m immensely proud of what we achieved. This event was more than just a gathering; it was a celebration of growth, learning, and the spirit of community.</p>
<p>Thank you for reading, and stay tuned for more insights into upcoming events!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730169245758/2a7ab7a2-98cc-4759-a3c7-3c12446a97d5.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730169143497/44f17d1e-2d7d-4fb1-a206-5649f940a0a4.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730169459588/48e57b9a-c033-403f-9269-b8998956080a.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://media.licdn.com/dms/image/v2/D4D22AQGJf_6JvHerQA/feedshare-shrink_2048_1536/feedshare-shrink_2048_1536/0/1727846331495?e=1733356800&amp;v=beta&amp;t=hyh__d7PiEUaF8hZPoxLq1QUQR2rtL75w2ZMfgWkfv8" alt="No alt text provided for this image" /></p>
]]></content:encoded></item><item><title><![CDATA[Streamlined and Secure: Building a Modern CI/CD Pipeline on AWS with Open Source Tools (Jenkins, ArgoCD, Prometheus, Grafana, Trivy, SonarQube)]]></title><description><![CDATA[This project demonstrates how to deploy a Netflix clone application using a DevSecOps approach on Amazon Web Services (AWS) using Jenkins for CI, Prometheus & Grafana for monitoring, SonarQube & Trivy for security checks, and later deploying it in K8s...]]></description><link>https://lajahshrestha.com.np/streamlined-and-secure-building-a-modern-cicd-pipeline-on-aws-with-open-source-toolsjenkins-argocd-prometheusgrafana-trivy-sonarqube</link><guid isPermaLink="true">https://lajahshrestha.com.np/streamlined-and-secure-building-a-modern-cicd-pipeline-on-aws-with-open-source-toolsjenkins-argocd-prometheusgrafana-trivy-sonarqube</guid><category><![CDATA[DevSecOps]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[monitoring]]></category><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Sun, 31 Mar 2024 15:18:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711897114667/c95584b0-2161-4958-927d-046f646ef3b5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This project demonstrates how to deploy a Netflix clone application using a DevSecOps approach on Amazon Web Services (AWS), using Jenkins for CI, Prometheus &amp; Grafana for monitoring, SonarQube &amp; Trivy for security checks, and later deploying it in a K8s cluster using ArgoCD as a GitOps tool. This project was published by <a target="_blank" href="https://www.linkedin.com/in/nasiullha-chaudhari-5a80601a8/overlay/about-this-profile/?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base%3BOxgh5Y3xS2CSoXMxKFPAfw%3D%3D">Nasiullha Chaudhari</a> and can be found <a target="_blank" href="https://github.com/N4si/DevSecOps-Project">here</a>. This blog post builds upon the valuable foundation provided by the project's GitHub README, offering a deeper exploration of the implementation process.</p>
<p>Before we dive deep into the project, let's break it down into the phases in which the actions are performed:</p>
<p><strong>Phase 1: Initial Setup and Deployment</strong></p>
<p>Setup an EC2 Instance and run the application in docker container</p>
<p><strong>Phase 2: Security</strong></p>
<p>Implement Security checks using scanning tools</p>
<p><strong>Phase 3: CI/CD Setup</strong></p>
<ol>
<li><p><strong>Install Jenkins:</strong> This is used to automate the deployment process.</p>
</li>
<li><p><strong>Install Necessary Plugins:</strong> Jenkins plugins are installed for tools like SonarQube Scanner, NodeJS, and dependency management. Credentials for DockerHub are also added securely.</p>
</li>
<li><p><strong>Configure CI/CD Pipeline:</strong> A Jenkins pipeline is created to automate the following stages:</p>
<ul>
<li><p>Clean workspace</p>
</li>
<li><p>Checkout code from Git repository</p>
</li>
<li><p>SonarQube Analysis: Code is scanned for vulnerabilities and quality issues using SonarQube.</p>
</li>
<li><p>Quality Gate: The pipeline can be halted if quality thresholds are not met.</p>
</li>
<li><p>Install Dependencies: Any additional dependencies required by the application are installed.</p>
</li>
<li><p>OWASP FS SCAN: The code is scanned for potential vulnerabilities in dependencies using the OWASP Dependency-Check plugin.</p>
</li>
<li><p>TRIVY FS SCAN: The container image is scanned for vulnerabilities using Trivy.</p>
</li>
<li><p>Docker Build &amp; Push: The application is built as a Docker image, tagged with your credentials, and pushed to a Docker registry.</p>
</li>
<li><p>TRIVY: The pushed image is scanned again for vulnerabilities using Trivy.</p>
</li>
<li><p>Deploy to container: The application container is deployed and runs on the server.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Phase 4: Monitoring</strong></p>
<ol>
<li><p><strong>Install Prometheus and Grafana:</strong> These tools are used to monitor the application's performance and health. Prometheus collects metrics, while Grafana visualizes them for easy analysis.</p>
</li>
<li><p><strong>Configure Prometheus:</strong> Prometheus is configured to scrape metrics from the application and other relevant sources like Node Exporter (which collects system metrics from the server).</p>
</li>
<li><p><strong>Install Node Exporter:</strong> This tool is installed to collect system metrics from the server running the application.</p>
</li>
<li><p><strong>Configure Prometheus Plugin Integration:</strong> Jenkins can be integrated with Prometheus to monitor the CI/CD pipeline itself.</p>
</li>
<li><p><strong>Install Grafana:</strong> This is installed and configured to display visualizations of the collected metrics from Prometheus.</p>
</li>
</ol>
<p><strong>Phase 5: Notification</strong></p>
<ol>
<li><strong>Implement Notification Services:</strong> This could involve setting up email notifications or other mechanisms to receive alerts when issues arise.</li>
</ol>
<p><strong>Phase 6: Kubernetes (Optional)</strong></p>
<p>This phase covers deploying the application to a scalable environment using Kubernetes, a container orchestration platform. It also involves integrating monitoring with Prometheus and Node Exporter within Kubernetes.</p>
<p><a target="_blank" href="https://www.linkedin.com/in/nasiullha-chaudhari-5a80601a8/overlay/about-this-profile/?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base%3BOxgh5Y3xS2CSoXMxKFPAfw%3D%3D">  
</a></p>
<h2 id="heading-1-hardware-reqhttpswwwlinkedincominnasiullha-chaudhari-5a80601a8overlayabout-this-profilelipiurn3ali3apage3adflagship3profileviewbase3boxgh5y3xs2csoxmxkfpafw3d3duirements-server-conhttpswwwlinkedincominnasiullha-chaudhari-5a80601a8overlayabout-this-profilelipiurn3ali3apage3adflagship3profileviewbase3boxgh5y3xs2csoxmxkfpafw3d3dfiguration-and-port-on-which-each-service-is-configured-to-run"><a target="_blank" href="https://www.linkedin.com/in/nasiullha-chaudhari-5a80601a8/overlay/about-this-profile/?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base%3BOxgh5Y3xS2CSoXMxKFPAfw%3D%3D">1. Hardware Req</a>ui<a target="_blank" href="https://www.linkedin.com/in/nasiullha-chaudhari-5a80601a8/overlay/about-this-profile/?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base%3BOxgh5Y3xS2CSoXMxKFPAfw%3D%3D">rements: Server Con</a>figuration and port on which each service is configured to run:</h2>
<pre><code class="lang-jsx">Server <span class="hljs-number">1</span>(Application/Pipeline Instance) : T2.large
<span class="hljs-number">1.</span> Jenkins(<span class="hljs-number">8080</span>)
<span class="hljs-number">2.</span> sonarqube(<span class="hljs-number">9000</span>)
<span class="hljs-number">3.</span> Netflix Container(<span class="hljs-number">8081</span>)
---------------------------------------------------------------------=
On server <span class="hljs-number">2</span>(Monitoring Instance): T2.medium
<span class="hljs-number">3.</span> Node Exporteer(<span class="hljs-number">9100</span>)
<span class="hljs-number">4.</span> Prometheus(<span class="hljs-number">9090</span>) -&gt; yaml configuration
<span class="hljs-number">5.</span> Grafana(<span class="hljs-number">3000</span>) data source
</code></pre>
<hr />
<h2 id="heading-2-application-architecture">2. Application Architecture:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711896682786/b98dfb30-3ec0-487e-9c31-6d4eaf53d38a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-4-implementation">4. Implementation.</h2>
<h3 id="heading-phase-1-initial-setup-and-deployment"><strong>Phase 1: Initial Setup and Deployment</strong></h3>
<p>Instance: <strong>i-0a9b2965ff3babddd (devSecOps)</strong></p>
<p><strong>Step 1: Launch EC2 (Ubuntu 22.04):</strong></p>
<ul>
<li><p>Provision an EC2 instance on AWS with Ubuntu 22.04.</p>
</li>
<li><p>Connect to the instance using SSH.</p>
</li>
</ul>
<p><strong>Step 2: Clone the Code:</strong></p>
<ul>
<li><p>Update all the packages and then clone the code.</p>
</li>
<li><p>Clone your application's code repository onto the EC2 instance:</p>
<pre><code class="lang-plaintext">  git clone https://github.com/N4si/DevSecOps-Project.git
</code></pre>
</li>
</ul>
<p><strong>Step 3: Install Docker and Run the App Using a Container:</strong></p>
<ul>
<li><p>Set up Docker on the EC2 instance:</p>
<pre><code class="lang-plaintext">  sudo apt-get update
  sudo apt-get install docker.io -y
  sudo usermod -aG docker $USER  # Replace with your system's username, e.g., 'ubuntu'
  newgrp docker
  sudo chmod 777 /var/run/docker.sock
</code></pre>
</li>
<li><p>Build and run your application using Docker containers:</p>
<pre><code class="lang-plaintext">  docker build -t netflix .
  docker run -d --name netflix -p 8081:80 netflix:latest

  #to delete
  docker stop &lt;containerid&gt;
  docker rmi -f netflix
</code></pre>
</li>
</ul>
<p>Once the container is running we can access the website, but the page is empty, because the content is delivered via an external API (TMDB) whose key was supposed to be passed as an argument in our Docker command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711897289611/55a042f9-399c-44a7-9109-1aa7251087d0.png" alt class="image--center mx-auto" /></p>
<p><strong>TMDB</strong>: The Movie Database API credentials (personal) for the Netflix clone website.</p>
<p><strong>Step 4: Get the API Key:</strong></p>
<ul>
<li><p>Open a web browser and navigate to TMDB (The Movie Database) website.</p>
</li>
<li><p>Click on "Login" and create an account.</p>
</li>
<li><p>Once logged in, go to your profile and select "Settings."</p>
</li>
<li><p>Click on "API" from the left-side panel.</p>
</li>
<li><p>Create a new API key by clicking "Create" and accepting the terms and conditions.</p>
</li>
<li><p>Provide the required basic details and click "Submit."</p>
</li>
<li><p>You will receive your TMDB API key.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711897407000/05044281-17ae-4af3-b2d2-3f198a7cd0db.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Now recreate the Docker image with your API key:</p>
<pre><code class="lang-plaintext">docker build --build-arg TMDB_V3_API_KEY=&lt;your-api-key&gt; -t netflix .
</code></pre>
<pre><code class="lang-jsx">sudo docker run -d -p <span class="hljs-number">8081</span>:<span class="hljs-number">80</span> netflix2
docker run -d --name sonar -p <span class="hljs-number">9000</span>:<span class="hljs-number">9000</span> sonarqube:lts-community
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711897423995/93a7c612-4560-46b6-9726-e676f20e3b52.png" alt class="image--center mx-auto" /></p>
<p><strong>Phase 2: Security</strong></p>
<ol>
<li><p><strong>Install SonarQube and Trivy:</strong></p>
<ul>
<li><p>Install SonarQube and Trivy on the EC2 instance to scan for vulnerabilities.</p>
<p>  SonarQube:</p>
<pre><code class="lang-plaintext">  docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
</code></pre>
<p>  To access:</p>
<p>  publicIP:9000 (by default username &amp; password is admin)</p>
<p>  To install Trivy:</p>
<pre><code class="lang-plaintext">  sudo apt-get install wget apt-transport-https gnupg lsb-release
  wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
  echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
  sudo apt-get update
  sudo apt-get install trivy
</code></pre>
<p>  To scan an image using Trivy:</p>
<pre><code class="lang-plaintext">  trivy image &lt;imageid&gt;
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Integrate SonarQube and Configure:</strong></p>
<ul>
<li><p>Integrate SonarQube with your CI/CD pipeline.</p>
</li>
<li><p>Configure SonarQube to analyze code for quality and security issues.</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711897463942/ed41ce6c-bf5b-4e07-98e5-f5b4f4efd55f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-phase-3-cicd-setup"><strong>Phase 3: CI/CD Setup</strong></h3>
<ol>
<li><p><strong>Install Jenkins for Automation:</strong></p>
<ul>
<li>Install Jenkins on the EC2 instance to automate deployment: Install Java</li>
</ul>
</li>
</ol>
<pre><code class="lang-plaintext">    sudo apt update
    sudo apt install fontconfig openjdk-17-jre
    java -version
    openjdk version "17.0.8" 2023-07-18
    OpenJDK Runtime Environment (build 17.0.8+7-Debian-1deb12u1)
    OpenJDK 64-Bit Server VM (build 17.0.8+7-Debian-1deb12u1, mixed mode, sharing)

    #jenkins
    sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
    https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
    echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
    https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
    /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
    sudo apt-get update
    sudo apt-get install jenkins
    sudo systemctl start jenkins
    sudo systemctl enable jenkins
</code></pre>
<ul>
<li><p>Access Jenkins in a web browser using the public IP of your EC2 instance.</p>
<p>  publicIp:8080</p>
</li>
</ul>
<ol start="2">
<li><strong>Install Necessary Plugins in Jenkins:</strong></li>
</ol>
<p>Go to Manage Jenkins → Plugins → Available Plugins and install the plugins below:</p>
<ol>
<li><p>Eclipse Temurin Installer (Install without restart)</p>
</li>
<li><p>SonarQube Scanner (Install without restart)</p>
</li>
<li><p>NodeJs Plugin (Install without restart)</p>
</li>
<li><p>Email Extension Plugin</p>
</li>
</ol>
<h3 id="heading-configure-java-and-nodejs-in-global-tool-configuration"><strong>Configure Java and Nodejs in Global Tool Configuration</strong></h3>
<p>Go to Manage Jenkins → Tools → Install JDK (17) and NodeJs (16) → Click on Apply and Save</p>
<h3 id="heading-sonarqube">SonarQube</h3>
<p>Create the token, then go to Jenkins Dashboard → Manage Jenkins → Credentials → Add Secret Text and paste the SonarQube token there.</p>
<p>Click on Apply and Save.</p>
<p><strong>The Configure System option</strong> is used in Jenkins to configure different servers.</p>
<p><strong>Global Tool Configuration</strong> is used to configure the different tools that we install using plugins.</p>
<p>We will install the sonar scanner under tools.</p>
<p>Create a webhook in SonarQube pointing to your Jenkins server (for example, <code>http://&lt;jenkins-public-ip&gt;:8080/sonarqube-webhook/</code>) so that quality gate results are reported back to the pipeline.</p>
<ol>
<li><strong>Configure CI/CD Pipeline in Jenkins:</strong></li>
</ol>
<ul>
<li>Create a CI/CD pipeline in Jenkins to automate your application deployment.</li>
</ul>
<p><strong>Install Dependency-Check and Docker Tools in Jenkins</strong></p>
<p><strong>Install Dependency-Check Plugin:</strong></p>
<ul>
<li><p>Go to "Dashboard" in your Jenkins web interface.</p>
</li>
<li><p>Navigate to "Manage Jenkins" → "Manage Plugins."</p>
</li>
<li><p>Click on the "Available" tab and search for "OWASP Dependency-Check."</p>
</li>
<li><p>Check the checkbox for "OWASP Dependency-Check" and click on the "Install without restart" button.</p>
</li>
</ul>
<p><strong>Configure Dependency-Check Tool:</strong></p>
<ul>
<li><p>After installing the Dependency-Check plugin, you need to configure the tool.</p>
</li>
<li><p>Go to "Dashboard" → "Manage Jenkins" → "Global Tool Configuration."</p>
</li>
<li><p>Find the section for "OWASP Dependency-Check."</p>
</li>
<li><p>Add the tool's name, e.g., "DP-Check."</p>
</li>
<li><p>Save your settings.</p>
</li>
</ul>
<p><strong>Install Docker Tools and Docker Plugins:</strong></p>
<ul>
<li><p>Go to "Dashboard" in your Jenkins web interface.</p>
</li>
<li><p>Navigate to "Manage Jenkins" → "Manage Plugins."</p>
</li>
<li><p>Click on the "Available" tab and search for "Docker."</p>
</li>
<li><p>Check the following Docker-related plugins:</p>
<ul>
<li><p>Docker</p>
</li>
<li><p>Docker Commons</p>
</li>
<li><p>Docker Pipeline</p>
</li>
<li><p>Docker API</p>
</li>
<li><p>docker-build-step</p>
</li>
</ul>
</li>
<li><p>Click on the "Install without restart" button to install these plugins.</p>
</li>
</ul>
<p><strong>Add DockerHub Credentials:</strong></p>
<ul>
<li><p>To securely handle DockerHub credentials in your Jenkins pipeline, follow these steps:</p>
<ul>
<li><p>Go to "Dashboard" → "Manage Jenkins" → "Manage Credentials."</p>
</li>
<li><p>Click on "System" and then "Global credentials (unrestricted)."</p>
</li>
<li><p>Click on "Add Credentials" on the left side.</p>
</li>
<li><p>Choose "Secret text" as the kind of credentials.</p>
</li>
<li><p>Enter your DockerHub credentials (Username and Password) and give the credentials an ID (e.g., "docker").</p>
</li>
<li><p>Click "OK" to save your DockerHub credentials.</p>
</li>
</ul>
</li>
</ul>
<p>Now, you have installed the Dependency-Check plugin, configured the tool, and added Docker-related plugins along with your DockerHub credentials in Jenkins. You can now proceed with configuring your Jenkins pipeline to include these tools and credentials in your CI/CD process.</p>
<pre><code class="lang-jsx">
pipeline{
    agent any
    tools{
        jdk <span class="hljs-string">'jdk17'</span>
        nodejs <span class="hljs-string">'node16'</span>
    }
    environment {
        SCANNER_HOME=tool <span class="hljs-string">'sonar-scanner'</span>
    }
    stages {
        stage(<span class="hljs-string">'clean workspace'</span>){
            steps{
                cleanWs()
            }
        }
        stage(<span class="hljs-string">'Checkout from Git'</span>){
            steps{
                git branch: <span class="hljs-string">'main'</span>, <span class="hljs-attr">url</span>: <span class="hljs-string">'&lt;https://github.com/LajahShrestha/DevSecOps-Project.git&gt;'</span>
            }
        }
        stage(<span class="hljs-string">"Sonarqube Analysis "</span>){
            steps{
                withSonarQubeEnv(<span class="hljs-string">'sonar-server'</span>) {
                    sh <span class="hljs-string">''</span><span class="hljs-string">' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \\
                    -Dsonar.projectKey=Netflix '</span><span class="hljs-string">''</span>
                }
            }
        }
        stage(<span class="hljs-string">"quality gate"</span>){
           steps {
                script {
                    waitForQualityGate abortPipeline: <span class="hljs-literal">false</span>, <span class="hljs-attr">credentialsId</span>: <span class="hljs-string">'Sonar-token'</span> 
                }
            } 
        }
        stage(<span class="hljs-string">'Install Dependencies'</span>) {
            steps {
                sh <span class="hljs-string">"npm install"</span>
            }
        }
        stage(<span class="hljs-string">'OWASP FS SCAN'</span>) {
            steps {
                dependencyCheck additionalArguments: <span class="hljs-string">'--scan ./ --disableYarnAudit --disableNodeAudit'</span>, <span class="hljs-attr">odcInstallation</span>: <span class="hljs-string">'DP-Check'</span>
                dependencyCheckPublisher pattern: <span class="hljs-string">'**/dependency-check-report.xml'</span>
            }
        }
        stage(<span class="hljs-string">'TRIVY FS SCAN'</span>) {
            steps {
                sh <span class="hljs-string">"trivy fs . &gt; trivyfs.txt"</span>
            }
        }
        stage(<span class="hljs-string">"Docker Build &amp; Push"</span>){
            steps{
                script{
                   withDockerRegistry(credentialsId: <span class="hljs-string">'docker'</span>, <span class="hljs-attr">toolName</span>: <span class="hljs-string">'docker'</span>){   
                       sh <span class="hljs-string">"docker build --build-arg TMDB_V3_API_KEY=8ddb5be8173d1f5790522f4d62bf3937 -t netflix ."</span>
                       sh <span class="hljs-string">"docker tag netflix lajahshrestha/netflix-clone:latest "</span>
                       sh <span class="hljs-string">"docker push lajahshrestha/netflix-clone:latest "</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">"TRIVY"</span>){
            steps{
                sh <span class="hljs-string">"trivy image lajahshrestha/netflix-clone:latest &gt; trivyimage.txt"</span> 
            }
        }
        stage(<span class="hljs-string">'Deploy to container'</span>){
            steps{
                sh <span class="hljs-string">'docker run -d -p 8081:80 lajahshrestha/netflix-clone:latest'</span>
            }
        }
    }
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711897543265/998b2b8c-f0b3-4674-bf43-923519a3fa00.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711897550582/18c01a69-cc05-4384-afbb-1eaa7a0b2afb.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-phase-4-monitoring"><strong>Phase 4: Monitoring</strong></h3>
<p>Prometheus and Grafana are a powerful duo for monitoring applications and infrastructure. Prometheus, the unsung hero, acts as a data collector, constantly gathering metrics on various aspects like CPU usage, memory consumption, or API request counts. It keeps a watchful eye on these metrics, and if they stray from expected levels, it throws up an alert. But raw data can be overwhelming. This is where Grafana steps in. It acts as the visualizer, transforming the data collected by Prometheus into easy-to-understand graphs, charts, and dashboards. You can customize these dashboards to focus on key performance indicators (KPIs) that matter most to you. Together, they provide a clear picture of your application's health and performance, allowing you to identify and address potential issues before they snowball.</p>
<ol>
<li><p><strong>Install Prometheus and Grafana:</strong></p>
<p> Set up Prometheus and Grafana to monitor your application.</p>
<p> <strong>Installing Prometheus:</strong></p>
<p> First, create a dedicated Linux user for Prometheus and download Prometheus:</p>
<pre><code class="lang-plaintext"> sudo useradd --system --no-create-home --shell /bin/false prometheus
 wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
</code></pre>
<p> Extract Prometheus files, move them, and create directories:</p>
<pre><code class="lang-plaintext"> tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
 cd prometheus-2.47.1.linux-amd64/
 sudo mkdir -p /data /etc/prometheus
 sudo mv prometheus promtool /usr/local/bin/
 sudo mv consoles/ console_libraries/ /etc/prometheus/
 sudo mv prometheus.yml /etc/prometheus/prometheus.yml
</code></pre>
<p> Set ownership for directories:</p>
<pre><code class="lang-plaintext"> sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
</code></pre>
<p> Create a systemd unit configuration file for Prometheus:</p>
<pre><code class="lang-plaintext"> sudo nano /etc/systemd/system/prometheus.service
</code></pre>
<p> Add the following content to the <code>prometheus.service</code> file:</p>
<pre><code class="lang-plaintext"> [Unit]
 Description=Prometheus
 Wants=network-online.target
 After=network-online.target

 StartLimitIntervalSec=500
 StartLimitBurst=5

 [Service]
 User=prometheus
 Group=prometheus
 Type=simple
 Restart=on-failure
 RestartSec=5s
 ExecStart=/usr/local/bin/prometheus \
   --config.file=/etc/prometheus/prometheus.yml \
   --storage.tsdb.path=/data \
   --web.console.templates=/etc/prometheus/consoles \
   --web.console.libraries=/etc/prometheus/console_libraries \
   --web.listen-address=0.0.0.0:9090 \
   --web.enable-lifecycle

 [Install]
 WantedBy=multi-user.target
</code></pre>
<p> Here's a brief explanation of the key parts in this <code>prometheus.service</code> file:</p>
<ul>
<li><p><code>User</code> and <code>Group</code> specify the Linux user and group under which Prometheus will run.</p>
</li>
<li><p><code>ExecStart</code> is where you specify the Prometheus binary path, the location of the configuration file (<code>prometheus.yml</code>), the storage directory, and other settings.</p>
</li>
<li><p><code>web.listen-address</code> configures Prometheus to listen on all network interfaces on port 9090.</p>
</li>
<li><p><code>web.enable-lifecycle</code> allows for management of Prometheus through API calls.</p>
</li>
</ul>
</li>
</ol>
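<p>    If systemd was already running when you created the unit file, reload it so the new unit is picked up (the same applies to the Node Exporter unit created below):</p>
<pre><code class="lang-plaintext">    sudo systemctl daemon-reload
</code></pre>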
<p>    Enable and start Prometheus:</p>
<pre><code class="lang-plaintext">    sudo systemctl enable prometheus
    sudo systemctl start prometheus
</code></pre>
<p>    Verify Prometheus's status:</p>
<pre><code class="lang-plaintext">    sudo systemctl status prometheus
</code></pre>
<p>    <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/406d53e2-f0b7-4644-8900-387a607db1fe/Untitled.png" alt="Untitled" /></p>
<p>    You can access Prometheus in a web browser using your server's IP and port 9090:</p>
<p>    <code>http://&lt;your-server-ip&gt;:9090</code></p>
<p>    <strong>Installing Node Exporter:</strong></p>
<p>    Create a system user for Node Exporter and download Node Exporter:</p>
<pre><code class="lang-plaintext">    sudo useradd --system --no-create-home --shell /bin/false node_exporter
    wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
</code></pre>
<p>    Extract Node Exporter files, move the binary, and clean up:</p>
<pre><code class="lang-plaintext">    tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
    sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
    rm -rf node_exporter*
</code></pre>
<p>    Create a systemd unit configuration file for Node Exporter:</p>
<pre><code class="lang-plaintext">    sudo nano /etc/systemd/system/node_exporter.service
</code></pre>
<p>    Add the following content to the <code>node_exporter.service</code> file:</p>
<pre><code class="lang-plaintext">    [Unit]
    Description=Node Exporter
    Wants=network-online.target
    After=network-online.target

    StartLimitIntervalSec=500
    StartLimitBurst=5

    [Service]
    User=node_exporter
    Group=node_exporter
    Type=simple
    Restart=on-failure
    RestartSec=5s
    ExecStart=/usr/local/bin/node_exporter --collector.logind

    [Install]
    WantedBy=multi-user.target
</code></pre>
<p>    Replace <code>--collector.logind</code> with any additional flags as needed.</p>
<p>    Enable and start Node Exporter:</p>
<pre><code class="lang-plaintext">    sudo systemctl enable node_exporter
    sudo systemctl start node_exporter
</code></pre>
<p>    Verify the Node Exporter's status:</p>
<pre><code class="lang-plaintext">    sudo systemctl status node_exporter
</code></pre>
<p>    You can access Node Exporter metrics in Prometheus.</p>
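<p>    To confirm the Node Exporter is actually serving data before wiring it into Prometheus, you can query it directly (it listens on port 9100 by default):</p>
<pre><code class="lang-plaintext">    curl -s http://localhost:9100/metrics | head
</code></pre>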
<ol start="2">
<li><p><strong>Configure Prometheus Plugin Integration:</strong></p>
<p> Integrate Jenkins with Prometheus to monitor the CI/CD pipeline.</p>
<p> <strong>Prometheus Configuration:</strong></p>
<p> To configure Prometheus to scrape metrics from Node Exporter and Jenkins, you need to modify the <code>prometheus.yml</code> file. Here is an example <code>prometheus.yml</code> configuration for your setup:</p>
<pre><code class="lang-plaintext"> scrape_configs:
   - job_name: "prometheus"
     static_configs:
       - targets: ["localhost:9090"]

   - job_name: 'node_exporter'
     static_configs:
       - targets: ['localhost:9100']

   - job_name: 'jenkins'
     metrics_path: '/prometheus'
     static_configs:
       - targets: ['&lt;your-jenkins-ip&gt;:&lt;your-jenkins-port&gt;']
</code></pre>
<p> Make sure to replace <code>&lt;your-jenkins-ip&gt;</code> and <code>&lt;your-jenkins-port&gt;</code> with the appropriate values for your Jenkins setup.</p>
<p> Check the validity of the configuration file:</p>
<pre><code class="lang-plaintext"> promtool check config /etc/prometheus/prometheus.yml
</code></pre>
<p> Reload the Prometheus configuration without restarting:</p>
<pre><code class="lang-plaintext"> curl -X POST &lt;http://localhost:9090/-/reload&gt;
</code></pre>
<p> You can access Prometheus targets at:</p>
<p> <code>http://&lt;your-prometheus-ip&gt;:9090/targets</code></p>
<p> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/8d87a139-55a8-48ab-8df3-5bc765b5c420/Untitled.png" alt="Untitled" /></p>
<p> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/c47cf561-7cea-4de0-8afb-94030d9d1332/Untitled.png" alt="Untitled" /></p>
<p> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/b4d073bb-0360-439b-b459-05b4bea78696/Untitled.png" alt="Untitled" /></p>
</li>
</ol>
<p><strong>Install Grafana on Ubuntu 22.04 and Set it up to Work with Prometheus</strong></p>
<p><strong>Step 1: Install Dependencies:</strong></p>
<p>First, ensure that all necessary dependencies are installed:</p>
<pre><code class="lang-plaintext">sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common
</code></pre>
<p><strong>Step 2: Add the GPG Key:</strong></p>
<p>Add the GPG key for Grafana:</p>
<pre><code class="lang-plaintext">wget -q -O - &lt;https://packages.grafana.com/gpg.key&gt; | sudo apt-key add -
</code></pre>
<p><strong>Step 3: Add Grafana Repository:</strong></p>
<p>Add the repository for Grafana stable releases:</p>
<pre><code class="lang-plaintext">echo "deb &lt;https://packages.grafana.com/oss/deb&gt; stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
</code></pre>
<p><strong>Step 4: Update and Install Grafana:</strong></p>
<p>Update the package list and install Grafana:</p>
<pre><code class="lang-plaintext">sudo apt-get update
sudo apt-get -y install grafana
</code></pre>
<p><strong>Step 5: Enable and Start Grafana Service:</strong></p>
<p>To automatically start Grafana after a reboot, enable the service:</p>
<pre><code class="lang-plaintext">sudo systemctl enable grafana-server
</code></pre>
<p>Then, start Grafana:</p>
<pre><code class="lang-plaintext">sudo systemctl start grafana-server
</code></pre>
<p><strong>Step 6: Check Grafana Status:</strong></p>
<p>Verify the status of the Grafana service to ensure it's running correctly:</p>
<pre><code class="lang-plaintext">sudo systemctl status grafana-server
</code></pre>
<p><strong>Step 7: Access Grafana Web Interface:</strong></p>
<p>Open a web browser and navigate to Grafana using your server's IP address. The default port for Grafana is 3000. For example:</p>
<p><code>http://&lt;your-server-ip&gt;:3000</code></p>
<p>You'll be prompted to log in to Grafana. The default username is "admin," and the default password is also "admin."</p>
<p><img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/07b28349-be1d-4f22-86f0-819400dac00e/Untitled.png" alt="Untitled" /></p>
<p><strong>Step 8: Change the Default Password:</strong></p>
<p>When you log in for the first time, Grafana will prompt you to change the default password for security reasons. Follow the prompts to set a new password.</p>
<p><strong>Step 9: Add Prometheus Data Source:</strong></p>
<p>To visualize metrics, you need to add a data source. Follow these steps:</p>
<ul>
<li><p>Click on the gear icon (⚙️) in the left sidebar to open the "Configuration" menu.</p>
</li>
<li><p>Select "Data Sources."</p>
</li>
<li><p>Click on the "Add data source" button.</p>
</li>
<li><p>Choose "Prometheus" as the data source type.</p>
</li>
<li><p>In the "HTTP" section:</p>
<ul>
<li><p>Set the "URL" to <code>http://localhost:9090</code> (assuming Prometheus is running on the same server).</p>
</li>
<li><p>Click the "Save &amp; Test" button to ensure the data source is working.</p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/0ee2321c-eed8-4264-a54c-65ab4e53e755/Untitled.png" alt="Untitled" /></p>
<p><strong>Step 10: Import a Dashboard:</strong></p>
<p>To make it easier to view metrics, you can import a pre-configured dashboard. Follow these steps:</p>
<ul>
<li><p>Click on the "+" (plus) icon in the left sidebar to open the "Create" menu.</p>
</li>
<li><p>Select "Dashboard."</p>
</li>
<li><p>Click on the "Import" dashboard option.</p>
</li>
<li><p>Enter the ID of the dashboard you want to import (e.g., 1860, the Node Exporter Full dashboard).</p>
</li>
<li><p>Click the "Load" button.</p>
</li>
<li><p>Select the data source you added (Prometheus) from the dropdown.</p>
</li>
<li><p>Click on the "Import" button.</p>
</li>
</ul>
<p>You should now have a Grafana dashboard set up to visualize metrics from Prometheus.</p>
<p><img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/bcdba78d-f288-42da-9115-4c909463c61d/Untitled.png" alt="Untitled" /></p>
<p>Grafana is a powerful tool for creating visualizations and dashboards, and you can further customize it to suit your specific monitoring needs.</p>
<p>That's it! You've successfully installed and set up Grafana to work with Prometheus for monitoring and visualization.</p>
<ol>
<li><p><strong>Configure Prometheus Plugin Integration:</strong></p>
<ul>
<li>Install the Prometheus metrics plugin in Jenkins (Manage Jenkins → Plugins), which exposes Jenkins metrics at <code>/prometheus</code> — the path the scrape job above points at — then confirm the Jenkins target shows as UP on the Prometheus targets page.</li>
</ul>
</li>
</ol>
<p>    <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/d0304725-49f8-4382-bca5-72d3d34a0577/Untitled.png" alt="Untitled" /></p>
<p><img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/946ce4eb-f5e1-4422-b914-6d2ab0bcf71f/Untitled.png" alt="Untitled" /></p>
<p><strong>Phase 5: Notification</strong></p>
<ol>
<li><p><strong>Implement Notification Services:</strong></p>
<ul>
<li>Set up email notifications in Jenkins (e.g., via the Email Extension plugin). For Gmail, generate an app password rather than using your account password, and store it as a Jenkins credential:</li>
</ul>
</li>
</ol>
<p>    <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/2b6e24b2-d461-4715-b119-00ceea61b072/Untitled.png" alt="Untitled" /></p>
<pre><code class="lang-jsx">    lajah.aws@gmail.com
    ktut rfsu nkog avvt
</code></pre>
<p>    <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/67df652a-97c6-4ab4-a32e-72238eac9f8d/Untitled.png" alt="Untitled" /></p>
<p>    <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/74341641-7150-4a30-b048-ed32fbd55682/0e081cb1-4c82-4003-a634-190a162d161c/Untitled.png" alt="Untitled" /></p>
<p><strong>Phase 6: Kubernetes</strong></p>
<p><strong>Create Kubernetes Cluster with Nodegroups</strong></p>
<p>In this phase, you'll set up a Kubernetes cluster with node groups. This will provide a scalable environment to deploy and manage your applications.</p>
<p><strong>Monitor Kubernetes with Prometheus</strong></p>
<p>Prometheus is a powerful monitoring and alerting toolkit, and you'll use it to monitor your Kubernetes cluster. Additionally, you'll install the node exporter using Helm to collect metrics from your cluster nodes.</p>
<p><strong>Install Node Exporter using Helm</strong></p>
<p>To begin monitoring your Kubernetes cluster, you'll install the Prometheus Node Exporter. This component allows you to collect system-level metrics from your cluster nodes. Here are the steps to install the Node Exporter using Helm:</p>
<ol>
<li><p>Add the Prometheus Community Helm repository:</p>
<pre><code class="lang-plaintext"> helm repo add prometheus-community &lt;https://prometheus-community.github.io/helm-charts&gt;
</code></pre>
</li>
<li><p>Create a Kubernetes namespace for the Node Exporter:</p>
<pre><code class="lang-plaintext"> kubectl create namespace prometheus-node-exporter
</code></pre>
</li>
<li><p>Install the Node Exporter using Helm:</p>
<pre><code class="lang-plaintext"> helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter --namespace prometheus-node-exporter
</code></pre>
</li>
</ol>
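<p>Once the Helm release is installed, you can verify that a Node Exporter pod is running on each node:</p>
<pre><code class="lang-plaintext">kubectl get pods -n prometheus-node-exporter -o wide
</code></pre>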
<p>Add a job to scrape metrics from <code>nodeIp:9100/metrics</code> in prometheus.yml:</p>
<p>Update your Prometheus configuration (prometheus.yml) to add a new job for scraping metrics from the Node Exporter running on your cluster nodes. You can do this by adding the following configuration to your prometheus.yml file:</p>
<pre><code class="lang-plaintext">  - job_name: 'Netflix'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['node1Ip:9100']
</code></pre>
<p>Replace <code>node1Ip</code> with the IP address of one of your Kubernetes nodes (and rename the <code>Netflix</code> job to anything descriptive). The static_configs section specifies the targets to scrape metrics from, in this case the Node Exporter on port 9100.</p>
<p>Don't forget to reload or restart Prometheus to apply these changes to your configuration.</p>
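<p>As earlier, you can validate the edited file and reload Prometheus without a full restart:</p>
<pre><code class="lang-plaintext">promtool check config /etc/prometheus/prometheus.yml
curl -X POST http://localhost:9090/-/reload
</code></pre>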
<p>To deploy an application with ArgoCD, you can follow these steps:</p>
<p><strong>Deploy Application with ArgoCD</strong></p>
<ol>
<li><p><strong>Install ArgoCD:</strong></p>
<p> You can install ArgoCD on your Kubernetes cluster by following the instructions provided in the <a target="_blank" href="https://archive.eksworkshop.com/intermediate/290_argocd/install/">EKS Workshop</a> documentation.</p>
</li>
<li><p><strong>Set Your GitHub Repository as a Source:</strong></p>
<p> After installing ArgoCD, you need to set up your GitHub repository as a source for your application deployment. This typically involves configuring the connection to your repository and defining the source for your ArgoCD application. The specific steps will depend on your setup and requirements.</p>
</li>
<li><p><strong>Create an ArgoCD Application:</strong> Define an <code>Application</code> resource with the following fields (a sample manifest is shown after this list):</p>
<ul>
<li><p><code>name</code>: Set the name for your application.</p>
</li>
<li><p><code>destination</code>: Define the destination where your application should be deployed.</p>
</li>
<li><p><code>project</code>: Specify the project the application belongs to.</p>
</li>
<li><p><code>source</code>: Set the source of your application, including the GitHub repository URL, revision, and the path to the application within the repository.</p>
</li>
<li><p><code>syncPolicy</code>: Configure the sync policy, including automatic syncing, pruning, and self-healing.</p>
</li>
</ul>
</li>
<li><p><strong>Access your Application</strong></p>
<ul>
<li>To access the app, make sure port 30007 is open in your security group, then open a new tab and browse to <code>NodeIP:30007</code>; your app should be running.</li>
</ul>
</li>
</ol>
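<p>For reference, here is a minimal sketch of such an Application manifest. The repository URL is the one used earlier in this guide; the application name, namespace, and <code>path</code> are illustrative assumptions, so adjust them to your repository layout:</p>
<pre><code class="lang-plaintext">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netflix-clone            # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/LajahShrestha/DevSecOps-Project.git
    targetRevision: HEAD
    path: Kubernetes             # assumed path to the manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true                # delete resources removed from Git
      selfHeal: true             # revert manual drift in the cluster
</code></pre>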
<h2 id="heading-conclusion">Conclusion</h2>
<p>By implementing a DevSecOps approach with tools like Prometheus and Grafana, you gain a comprehensive view of your application's security, performance, and health. This allows for proactive management, ensuring a secure and well-functioning application throughout its lifecycle. Remember, DevSecOps is an ongoing process, so continuously refine your approach as your application evolves, and make sure to clean up the resources you created along the way to avoid ongoing charges from the vendor. Happy learning.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring AWS Deployment Diversity]]></title><description><![CDATA[In the expansive realm of cloud computing, Amazon Web Services (AWS) emerges as a formidable force, offering an extensive suite of services designed to cater to the multifaceted needs of businesses and developers alike. As we embark on our exploratio...]]></description><link>https://lajahshrestha.com.np/exploring-aws-deployment-diversity</link><guid isPermaLink="true">https://lajahshrestha.com.np/exploring-aws-deployment-diversity</guid><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Tue, 26 Dec 2023 16:30:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1703605938386/9d947ed6-e346-4967-b2c8-4ce6ca163725.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the expansive realm of cloud computing, Amazon Web Services (AWS) emerges as a formidable force, offering an extensive suite of services designed to cater to the multifaceted needs of businesses and developers alike. As we embark on our exploration of AWS deployment options, it is imperative to grasp the foundational services that underpin this cloud ecosystem.</p>
<p>AWS encompasses a myriad of cloud computing services, spanning computing power, storage, databases, machine learning, and more. These services collectively empower organizations to build, deploy, and scale applications with unparalleled flexibility and efficiency.</p>
<hr />
<h3 id="heading-b-understanding-deployment-needs"><strong>B. Understanding Deployment Needs</strong></h3>
<p>To navigate the AWS cloud effectively, it is crucial to discern the factors influencing deployment choices. Scalability, ease of management, and resource optimization are pivotal considerations in crafting a deployment strategy tailored to specific project requirements. As we delve into the intricacies of AWS deployment options, we will explore how each method aligns with these critical considerations.</p>
<h2 id="heading-aws-deployment-methods"><strong>AWS Deployment Methods</strong></h2>
<h3 id="heading-a-traditional-deployment-on-ec2"><strong>A. Traditional Deployment on EC2</strong></h3>
<p>Traditional deployment on Elastic Compute Cloud (EC2) instances involves manually deploying code on virtual machines. This method provides a high level of control over the infrastructure, making it suitable for scenarios where fine-tuning at the infrastructure level is paramount. It is particularly favored in applications where specific configurations and dedicated resources are essential.</p>
<p><img src="https://docs.aws.amazon.com/images/AWSEC2/latest/UserGuide/images/ec2-basic-arch.png" alt="
A basic architecture diagram of an EC2 instance within a VPC.
" /></p>
<h4 id="heading-pros">Pros:</h4>
<ul>
<li><p><strong>Fine-Grained Control:</strong> Traditional deployment on EC2 provides developers with complete control over the underlying infrastructure, allowing for customized configurations and security settings.</p>
</li>
<li><p><strong>Versatility:</strong> Suitable for a wide range of applications, especially those with specific infrastructure requirements.</p>
</li>
<li><p><strong>Legacy Application Support:</strong> Ideal for hosting legacy applications that may not be easily containerized.</p>
</li>
</ul>
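<p>As a concrete illustration, a manual deployment to an EC2 instance often boils down to copying a build artifact over SSH and restarting a service. A minimal sketch (the host, key, and service names are hypothetical):</p>
<pre><code class="lang-plaintext"># Copy the artifact to the instance
scp -i ~/.ssh/my-key.pem app.tar.gz ubuntu@ec2-host:/tmp/

# Unpack it and restart the service that serves it
ssh -i ~/.ssh/my-key.pem ubuntu@ec2-host \
  "sudo tar -xzf /tmp/app.tar.gz -C /opt/myapp &amp;&amp; sudo systemctl restart myapp"
</code></pre>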
<h3 id="heading-b-containerization-on-ec2"><strong>B. Containerization on EC2</strong></h3>
<p>Containerization, epitomized by Docker, has transformed the deployment landscape. AWS supports this paradigm through services like Docker on EC2. Containerization allows for packaging applications and their dependencies, ensuring consistency across different environments. This method is preferred in microservices architectures, providing agility and scalability by encapsulating services in lightweight, portable containers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703607743026/00cab7ae-e0d4-486f-a383-1101f42c50bc.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-pros-1">Pros:</h4>
<ul>
<li><p><strong>Portability:</strong> Containers encapsulate applications and dependencies, ensuring consistency across different environments and easing deployment across various platforms.</p>
</li>
<li><p><strong>Scalability:</strong> Well-suited for microservices architectures, allowing each service to scale independently, enhancing overall system scalability.</p>
</li>
<li><p><strong>Resource Efficiency:</strong> Containers share the host OS kernel, leading to more efficient resource utilization compared to traditional VMs.</p>
</li>
</ul>
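<p>In practice, containerizing a service on an EC2 host can be as simple as building an image from a Dockerfile and running it. A minimal sketch (the image name and ports are illustrative):</p>
<pre><code class="lang-plaintext"># Build the image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Run it, mapping host port 80 to the container's port 8080
docker run -d --restart unless-stopped -p 80:8080 myapp:1.0
</code></pre>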
<h3 id="heading-c-aws-elastic-beanstalk"><strong>C. AWS Elastic Beanstalk</strong></h3>
<p>AWS Elastic Beanstalk abstracts away infrastructure details, streamlining the deployment process for developers. It is an excellent choice for projects where simplicity and rapid deployment are prioritized. Elastic Beanstalk is particularly suitable for web applications and services where developers want to focus more on coding and less on infrastructure management.</p>
<p><img src="https://d1.awsstatic.com/Product-Page-Diagram_AWS-Elastic-Beanstalk%402x.6027573605a77c0e53606d5264ec7d3053bf26af.png" alt="Diagram showing how AWS Elastic Beanstalk lets users create environments to upload and set up applications." /></p>
<h4 id="heading-pros-2">Pros:</h4>
<ul>
<li><p><strong>Simplicity:</strong> Abstracts away infrastructure details, making it easy for developers to deploy applications without worrying about underlying infrastructure configurations.</p>
</li>
<li><p><strong>Rapid Deployment:</strong> Well-suited for projects that require quick iterations and frequent updates, enabling rapid deployment of web applications.</p>
</li>
<li><p><strong>Automatic Scaling:</strong> Elastic Beanstalk provides automated scaling capabilities based on application demand.</p>
</li>
</ul>
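<p>With the EB CLI, a deployment takes only a few commands. A sketch assuming a Node.js application (the names, platform, and region are illustrative):</p>
<pre><code class="lang-plaintext">pip install awsebcli                                   # install the EB CLI
eb init my-app --platform node.js --region us-east-1   # link the project to Beanstalk
eb create my-app-env                                   # provision the environment
eb deploy                                              # push the current version
</code></pre>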
<h3 id="heading-d-aws-amplify"><strong>D. AWS Amplify</strong></h3>
<p>AWS Amplify is a platform tailored for developing and deploying web and mobile applications seamlessly. It excels in scenarios where quick iterations, continuous deployment, and scalability are essential. Amplify simplifies the deployment process, making it an ideal choice for projects with a strong focus on frontend and mobile development.</p>
<p><img src="https://scontent.fktm3-1.fna.fbcdn.net/v/t1.6435-9/59422326_2134504869920200_2529035472991158272_n.jpg?_nc_cat=105&amp;ccb=1-7&amp;_nc_sid=7f8c78&amp;_nc_ohc=0K2VPneKpkwAX_P-ES5&amp;_nc_ht=scontent.fktm3-1.fna&amp;oh=00_AfCXt-YGBzkMIghsGaqPqz1ZWD8JkHHkxl9rVwy8eVfXXw&amp;oe=65B26346" alt /></p>
<h4 id="heading-pros-3">Pros:</h4>
<ul>
<li><p><strong>Streamlined Development:</strong> Simplifies the development and deployment of web and mobile applications, enabling developers to focus on code rather than infrastructure.</p>
</li>
<li><p><strong>Continuous Deployment:</strong> Integrates seamlessly with version control systems, allowing for continuous deployment and delivery.</p>
</li>
<li><p><strong>Backend Integration:</strong> Provides backend services, making it suitable for full-stack development and enabling developers to build scalable serverless applications.</p>
</li>
</ul>
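<p>As a sketch of the workflow, the Amplify CLI connects a local frontend project to hosted AWS resources and deploys it (assuming a project already scaffolded locally):</p>
<pre><code class="lang-plaintext">npm install -g @aws-amplify/cli   # install the Amplify CLI
amplify init                      # connect the project to AWS
amplify add hosting               # add a hosting category
amplify publish                   # build and deploy the app
</code></pre>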
<h3 id="heading-e-amazon-ecs-elastic-container-service"><strong>E. Amazon ECS (Elastic Container Service)</strong></h3>
<p>Amazon ECS facilitates container orchestration at scale. This method is preferred in microservices architectures, offering efficient management and deployment of containerized applications. ECS allows developers to run, stop, and manage Docker containers on a cluster of EC2 instances, providing flexibility and scalability for containerized workloads.</p>
<p><img src="https://docker.awsworkshop.io/images/docker-ecs-overview.png" alt="Docker" /></p>
<h4 id="heading-pros-4">Pros:</h4>
<ul>
<li><p><strong>Container Orchestration:</strong> Efficiently manages and deploys containerized applications at scale, ideal for microservices architectures.</p>
</li>
<li><p><strong>Integration with AWS Services:</strong> Seamless integration with other AWS services, enabling enhanced functionality for containerized applications.</p>
</li>
<li><p><strong>Customization:</strong> Provides flexibility in defining networking, security, and scaling configurations for containerized workloads.</p>
</li>
</ul>
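<p>At the CLI level, running a containerized workload on ECS involves a cluster, a task definition, and a service. A minimal sketch (the names are illustrative and <code>taskdef.json</code> is an assumed task-definition file you provide):</p>
<pre><code class="lang-plaintext">aws ecs create-cluster --cluster-name demo-cluster
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs create-service --cluster demo-cluster --service-name web \
    --task-definition web --desired-count 2
</code></pre>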
<h3 id="heading-f-kubernetes-cluster-on-aws-eks"><strong>F. Kubernetes Cluster on AWS (EKS)</strong></h3>
<p>For those delving into the realm of Kubernetes, AWS provides Elastic Kubernetes Service (EKS). Kubernetes is ideal for orchestrating and managing containerized applications in a microservices architecture. EKS simplifies Kubernetes deployment, making it suitable for projects that demand the advanced orchestration capabilities of Kubernetes, especially in large-scale applications.</p>
<p><img src="https://docs.aws.amazon.com/images/eks/latest/userguide/images/what-is-eks.png" alt="
A basic flow diagram of the steps described previously.
" /></p>
<h4 id="heading-pros-5">Pros:</h4>
<ul>
<li><p><strong>Advanced Orchestration:</strong> Kubernetes offers advanced container orchestration capabilities, facilitating the management of complex microservices architectures.</p>
</li>
<li><p><strong>Community Support:</strong> Being an open-source platform, Kubernetes benefits from a vast and active community, ensuring continuous improvement and support.</p>
</li>
<li><p><strong>Multi-Cloud Deployment:</strong> EKS enables deploying Kubernetes clusters across multiple cloud providers, enhancing flexibility and avoiding vendor lock-in.</p>
</li>
</ul>
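<p>With eksctl, standing up a managed cluster is a single (long-running) command. A sketch with illustrative names and sizes:</p>
<pre><code class="lang-plaintext"># Create an EKS cluster with a two-node managed node group
eksctl create cluster --name demo-cluster --region us-east-1 \
    --nodegroup-name workers --node-type t3.medium --nodes 2

# Verify the worker nodes joined
kubectl get nodes
</code></pre>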
<h3 id="heading-g-serverless-deployments-with-aws-lambda"><strong>G. Serverless Deployments with AWS Lambda</strong></h3>
<p>The serverless architecture, exemplified by AWS Lambda, has gained prominence for its efficiency and cost-effectiveness. AWS Lambda allows developers to execute code without managing traditional server infrastructure. It is particularly favored in scenarios where event-driven, short-lived functions are sufficient, making it ideal for microservices architectures and applications with sporadic workloads.</p>
<p><img src="https://d1.awsstatic.com/product-marketing/Lambda/Diagrams/product-page-diagram_Lambda-RealTimeFileProcessing.a59577de4b6471674a540b878b0b684e0249a18c.png" alt="Diagram showing how AWS Lambda works. A photograph is taken, then uploaded to the S3 bucket. Lambda is triggered to run resizing code, and the photo is resized. " /></p>
<h4 id="heading-pros-6">Pros:</h4>
<ul>
<li><p><strong>Cost-Efficiency:</strong> Pay only for the compute time consumed, making it cost-effective for sporadically used functions and applications with variable workloads.</p>
</li>
<li><p><strong>Scalability:</strong> Scales automatically in response to incoming requests, ensuring optimal performance without the need for manual intervention.</p>
</li>
<li><p><strong>Event-Driven Architecture:</strong> Well-suited for event-driven architectures and microservices, allowing developers to focus on writing code rather than managing infrastructure.</p>
</li>
</ul>
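<p>As an illustration, deploying a small function with the AWS CLI looks roughly like this (the function name is illustrative and the IAM role ARN is a placeholder you must supply):</p>
<pre><code class="lang-plaintext"># Package the handler code
zip function.zip index.js

# Create the function; the role must allow Lambda execution
aws lambda create-function --function-name hello-fn \
    --runtime nodejs18.x --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::&lt;account-id&gt;:role/&lt;lambda-exec-role&gt;
</code></pre>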
<p>In this comprehensive exploration of AWS deployment diversity, it becomes evident that each method caters to specific use cases and preferences. Whether you seek fine-grained control, simplicity, or scalability, AWS provides a deployment solution tailored to your project's unique demands.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703654353239/b240fe0e-c571-48f8-82c7-809a5a28ae54.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Setting up Kubernetes cluster on AWS manually / onprem-VMs using Rancher kubernetes engine (Easy tutorial)]]></title><description><![CDATA[Introduction
Kubernetes has become the de facto standard for container orchestration, and setting up a Kubernetes cluster can be a crucial step in deploying and managing containerized applications. In this tutorial, we will guide you through the proc...]]></description><link>https://lajahshrestha.com.np/setting-up-kubernetes-cluster-on-aws-manually-onprem-vms-using-rancher-kubernetes-engine-easy-tutorial</link><guid isPermaLink="true">https://lajahshrestha.com.np/setting-up-kubernetes-cluster-on-aws-manually-onprem-vms-using-rancher-kubernetes-engine-easy-tutorial</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[setup]]></category><category><![CDATA[cluster]]></category><category><![CDATA[rancher]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Sun, 19 Nov 2023 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704288526041/092adcf7-7be8-4ebf-9928-de1e760ebb74.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>Kubernetes has become the de facto standard for container orchestration, and setting up a Kubernetes cluster can be a crucial step in deploying and managing containerized applications. In this tutorial, we will guide you through the process of manually setting up a Kubernetes cluster on AWS or on-premises VMs using Rancher Kubernetes Engine (RKE). This step-by-step guide will help you deploy a three-node cluster with one master and two agent nodes.</p>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<p>Before we begin, make sure you have the following resources available:</p>
<ul>
<li><p>Instances: 3 (Server 1, Server 2, Server 3)</p>
</li>
<li><p>vCPUs: 4</p>
</li>
<li><p>Memory: 8 GB</p>
</li>
<li><p>Storage: 160 GB</p>
</li>
</ul>
<h3 id="heading-cluster-architecture"><strong>Cluster Architecture</strong></h3>
<ul>
<li><p><strong>k8s-1</strong>: Server 1 (Master node)</p>
</li>
<li><p><strong>k8s-2</strong>: Server 2 (Agent node)</p>
</li>
<li><p><strong>k8s-3</strong>: Server 3 (Agent node)</p>
</li>
<li><p><img src="https://file.notion.so/f/f/74341641-7150-4a30-b048-ed32fbd55682/27d5e8d3-bdef-4b87-834f-b758d800342c/Untitled.png?id=99ce9cc6-1282-4c45-8c2f-80294f8c67dc&amp;table=block&amp;spaceId=74341641-7150-4a30-b048-ed32fbd55682&amp;expirationTimestamp=1704636000000&amp;signature=aiuIN68k9AkrCMhcsAii35WXy6soPkSBNOfamdzZRL8&amp;downloadName=Untitled.png" alt /></p>
</li>
</ul>
<h2 id="heading-part-1-master-node-setup-k8s-1"><strong>Part-1: Master Node Setup (k8s-1)</strong></h2>
<h3 id="heading-disable-firewall-and-install-rke"><strong>Disable Firewall and Install RKE</strong></h3>
<pre><code class="lang-plaintext">sudo su

# Disable firewall
systemctl disable --now ufw

# Update and install required packages
apt update
apt install nfs-common -y
apt upgrade -y
apt autoremove -y

# Install RKE2
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.26 INSTALL_RKE2_TYPE=server sh -
systemctl enable --now rke2-server.service
</code></pre>
<h3 id="heading-configure-kubectl-and-check-node-status"><strong>Configure kubectl and Check Node Status</strong></h3>
<pre><code class="lang-plaintext"># Symlink kubectl
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl

# Add kubectl configuration
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# Check node status
kubectl get node
</code></pre>
<p><img src="https://file.notion.so/f/f/74341641-7150-4a30-b048-ed32fbd55682/0fb73fcb-ebc2-4fb5-b13b-f93fe8f741f5/Untitled.png?id=5db26dec-92eb-4f80-bb6e-70d7a8b48bf4&amp;table=block&amp;spaceId=74341641-7150-4a30-b048-ed32fbd55682&amp;expirationTimestamp=1704636000000&amp;signature=UH9vZ42Gg2DVmBDKkIFCw65EO5gKqRpNDGphlr_J6iw&amp;downloadName=Untitled.png" alt /></p>
<h3 id="heading-obtain-node-token-for-agent-nodes-to-connect-with-master-node"><strong>Obtain Node Token for agent nodes to connect with master node</strong></h3>
<p><code>cat /var/lib/rancher/rke2/server/node-token</code></p>
<p>If something goes wrong and you need to reinstall RKE2, you can uninstall it with <code>bash /usr/local/bin/rke2-uninstall.sh</code> and then repeat the initial setup steps.</p>
<hr />
<h2 id="heading-part-2-slave-nodes-setup-k8s-2-and-k8s-3"><strong>Part-2: Slave Nodes Setup (k8s-2 and k8s-3)</strong></h2>
<h3 id="heading-disable-firewall-and-install-rke-1"><strong>Disable Firewall and Install RKE</strong></h3>
<pre><code class="lang-plaintext"># Disable firewall
systemctl disable --now ufw

# Update and install required packages
apt update
apt install nfs-common -y
apt upgrade -y
apt autoremove -y
</code></pre>
<h3 id="heading-add-configuration-for-vms-2-and-3"><strong>Add Configuration for VMs 2 and 3</strong></h3>
<pre><code class="lang-plaintext"># Export rancher1 IP and token
export RANCHER1_IP=10.0.4.196  # Change this!
export TOKEN=&lt;TOKEN_FROM_SERVER_1&gt;  # Change this as well.

# Install RKE2 as agent
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.26 INSTALL_RKE2_TYPE=agent sh -

# Create config file
mkdir -p /etc/rancher/rke2/
echo "server: https://$RANCHER1_IP:9345" &gt; /etc/rancher/rke2/config.yaml
echo "token: $TOKEN" &gt;&gt; /etc/rancher/rke2/config.yaml

# Enable and start
systemctl enable --now rke2-agent.service
</code></pre>
<p>Edit the configuration file (<code>vim /etc/rancher/rke2/config.yaml</code>) similarly for both Server 2 and Server 3.</p>
<h3 id="heading-start-rke2-services-on-slave-nodes"><strong>Start RKE2 Services on Slave Nodes</strong></h3>
<pre><code class="lang-plaintext">bashCopy code# Master Node (k8s-1)
systemctl enable rke2-server.service
systemctl start rke2-server.service
systemctl restart rke2-server.service
systemctl status rke2-server.service

# Agent Nodes (k8s-2 and k8s-3)
systemctl enable rke2-agent.service
systemctl start rke2-agent.service
systemctl restart rke2-agent.service
systemctl status rke2-agent.service
</code></pre>
<p><img src="https://file.notion.so/f/f/74341641-7150-4a30-b048-ed32fbd55682/6c510c39-afab-49bd-aa2e-a6a721e4c751/Untitled.png?id=ddaee96b-c069-4028-80d6-ebcc2216c926&amp;table=block&amp;spaceId=74341641-7150-4a30-b048-ed32fbd55682&amp;expirationTimestamp=1704636000000&amp;signature=zecq2zZttolwxAjDvSyuwjZiqwL3Mxed6Sb5OUQVZD0&amp;downloadName=Untitled.png" alt /></p>
<h3 id="heading-check-node-connection"><strong>Check Node Connection</strong></h3>
<pre><code class="lang-plaintext">bashCopy codekubectl get nodes -o wide -w
</code></pre>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F74341641-7150-4a30-b048-ed32fbd55682%2F8126024a-f297-41ac-8093-d2ff77bbf32e%2FUntitled.png?table=block&amp;id=eebb1d7c-3970-419e-9c19-7bc2244f8eb0&amp;spaceId=74341641-7150-4a30-b048-ed32fbd55682&amp;width=2000&amp;userId=f42432f0-b568-4582-93b3-81901802afea&amp;cache=v2" alt /></p>
<h2 id="heading-setting-up-rancher"><strong>Setting up Rancher</strong></h2>
<h3 id="heading-install-helm-and-add-repositories"><strong>Install Helm and Add Repositories</strong></h3>
<pre><code class="lang-plaintext">bashCopy code# Install Helm
curl -#L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add Helm repositories
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
</code></pre>
<h3 id="heading-configure-domain-and-install-cert-manager"><strong>Configure Domain and Install Cert-Manager</strong></h3>
<pre><code class="lang-plaintext">bashCopy code# Install cert-manager
helm upgrade -i cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set installCRDs=true
</code></pre>
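<p>Before installing Rancher, check that the cert-manager pods have come up:</p>
<pre><code class="lang-plaintext">kubectl get pods -n cert-manager
</code></pre>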
<h3 id="heading-install-rancher-with-custom-domain"><strong>Install Rancher with Custom Domain</strong></h3>
<pre><code class="lang-plaintext">bashCopy code# Install Rancher
helm upgrade -i rancher rancher-latest/rancher --create-namespace --namespace cattle-system --set hostname=&lt;yourdomain&gt;--set bootstrapPassword=bootStrapAllTheThings --set replicas=1
</code></pre>
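<p>You can watch the Rancher deployment roll out before opening the UI:</p>
<pre><code class="lang-plaintext">kubectl -n cattle-system rollout status deploy/rancher
</code></pre>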
<p>Here I have mapped my custom domain to the public IP of the master VM using <strong>AWS Route 53</strong>.</p>
<p>Now, if you access the domain, you should see the Rancher UI.</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F74341641-7150-4a30-b048-ed32fbd55682%2F0d1a7e44-8caa-4599-bc18-122b44998673%2FUntitled.png?table=block&amp;id=40a577e2-12d1-4c38-91af-e099c6f67cdb&amp;spaceId=74341641-7150-4a30-b048-ed32fbd55682&amp;width=2000&amp;userId=f42432f0-b568-4582-93b3-81901802afea&amp;cache=v2" alt /></p>
<p>Log in with the bootstrap password you set in the installation command. On first login, the site will be using a self-signed certificate.</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F74341641-7150-4a30-b048-ed32fbd55682%2F3282924a-9248-4f59-ac98-974bbcc3697a%2FUntitled.png?table=block&amp;id=573471f1-43c2-4546-aebc-bbe803bf578b&amp;spaceId=74341641-7150-4a30-b048-ed32fbd55682&amp;width=2000&amp;userId=f42432f0-b568-4582-93b3-81901802afea&amp;cache=v2" alt /></p>
<p>Congratulations! You have successfully set up a Kubernetes cluster on AWS or on-premises VMs using Rancher Kubernetes Engine (RKE). You can now access Rancher using the specified domain and bootstrap password.</p>
<p><strong>Architecture Diagram:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704548301641/a92d0d7e-8a35-4111-abbf-9d47340d74e7.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Deploy a Simple web application in EC2 using NGINX and certify using Certbot]]></title><description><![CDATA[Setup a barebone Ubuntu Server in AWS
Requirements:

Must use a imported ssh key (Use ed25519 key)(done)

should have a elastic IP attached (done)

Must allow ssh key pair


Setup a Webserver
Domain to use: custom domain name(if you have)
Must have S...]]></description><link>https://lajahshrestha.com.np/deploy-a-simple-web-application-in-ec2-using-nginx-and-certify-using-certbot</link><guid isPermaLink="true">https://lajahshrestha.com.np/deploy-a-simple-web-application-in-ec2-using-nginx-and-certify-using-certbot</guid><dc:creator><![CDATA[Lajah Shrestha]]></dc:creator><pubDate>Tue, 09 May 2023 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708184310414/eac012af-ea53-4334-98c8-2d2a7ae836b7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Setup a barebone Ubuntu Server in AWS</strong></p>
<p>Requirements:</p>
<ol>
<li><p>Must use an imported SSH key (use an ed25519 key) (done)</p>
</li>
<li><p>Should have an Elastic IP attached (done)</p>
</li>
<li><p>Must allow SSH access via the key pair</p>
</li>
</ol>
<p>Setup a Webserver</p>
<p>Domain to use: a custom domain name (if you have one)</p>
<p>Must have SSL/TLS enabled (use Certbot for this): <a target="_blank" href="https://certbot.eff.org/">Certbot</a></p>
<p><a target="_blank" href="https://certbot.eff.org/favicon.ico">https://certbot.eff.org/favicon.ico</a></p>
<p>Must use NGINX as a webserver (done)</p>
<p>References:</p>
<p><a target="_blank" href="https://www.unixtutorial.org/how-to-generate-ed25519-ssh-key/">How To Generate ed25519 SSH Key</a></p>
<p><a target="_blank" href="https://www.unixtutorial.org/favicon.ico">https://www.unixtutorial.org/favicon.ico</a></p>
<hr />
<ol>
<li><p><strong>Generating ssh key in ubuntu server</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708133823593/2c0f2955-1c89-4751-9ce9-b5de1ed29f51.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Importing ssh key</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708134025351/85d48444-5508-4c43-aad1-9e5bac9c6acd.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Deleting the existing SSH key and replacing it with the self-generated (imported) key pair</strong> by editing the <code>.ssh/authorized_keys</code> file on the Ubuntu instance with the Vim editor.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708134137045/427746f1-5a82-439e-b61b-4c427d0b924a.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><em>passphrase: &lt;your-passphrase&gt;</em></p>
<ol>
<li><p><strong>Adding additional ssh key pair</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708134169032/cc154140-be4b-4e53-84c1-168206dd4ea3.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708134186467/71ea3461-2267-4180-9fbe-840b325221fa.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Attaching Elastic IP</strong></p>
</li>
</ol>
<p><strong>Elastic IP</strong></p>
<p>An <em>Elastic IP address</em> is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is allocated to your AWS account, and is yours until you release it. By using an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Alternatively, you can specify the Elastic IP address in a DNS record for your domain, so that your domain points to your instance. For more information, see the documentation for your domain registrar, or <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dynamic-dns.html">Set up dynamic DNS on your Amazon Linux instance</a>.</p>
<p>An Elastic IP address is a public IPv4 address, which is reachable from the internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the internet. For example, this allows you to connect to your instance from your local computer.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708134257674/44f0125c-d1c2-42d0-9eb4-085937aee07e.png" alt class="image--right mx-auto mr-0" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708134269465/335230a0-60cb-44f1-b97b-ad52dd663bc9.png" alt class="image--center mx-auto" /></p>
<p><em>Elastic IP: 13.200.34.252</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708134297423/7bc6cf0c-1383-4d1d-8cd3-d5b4a48d2e7e.png" alt class="image--center mx-auto" /></p>
<ol>
<li><strong>Installing Certbot</strong></li>
</ol>
<pre><code class="lang-jsx">sudo apt-get install certbot python3-certbot-nginx
</code></pre>
<ol>
<li><p><strong>Route 53 for personal Domain</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708183381305/0122be72-7a84-4e65-aa88-f7ffe425b4fb.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>Creating a custom record in Route 53 for the domain</p>
<p><a target="_blank" href="http://www.lajahmercantile.test.mercantilecloud.com.np/">www.lajahmercantile.test.mercantilecloud.com.np</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708183395750/68306aa2-190c-46c0-9077-c1a9cda57725.png" alt class="image--center mx-auto" /></p>
<p>Route 53 successfully resolving to the instance that is hosting the webpage using the NGINX server</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708183411788/51ab2a7a-f3c7-42e5-b6bf-545b1cf14fbf.png" alt class="image--center mx-auto" /></p>
<ol>
<li><strong>Managing Website Files</strong></li>
</ol>
<p>Cloning the static website example into the instance's working directory</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708183703194/94182ee9-f848-4e2d-b35e-352bf890b21c.png" alt class="image--center mx-auto" /></p>
<p>Copying the website files to the <code>/var/www/lajahmercantile@mercantilecloud.com.np</code> directory</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708183715900/0eff9d73-2888-48dd-a8a0-76054df63e44.png" alt class="image--center mx-auto" /></p>
<ol>
<li><strong>Configuring NGINX to serve my static website:</strong></li>
</ol>
<p>→ Modifying the default configuration file <code>/etc/nginx/sites-enabled/default</code> so the server can locate our custom website</p>
<pre><code class="lang-jsx">sudo vim <span class="hljs-keyword">default</span>
</code></pre>
<p>→ Changing the default root location to the location containing our sample HTML file, as in the sketch below.</p>
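<p>For reference, the relevant part of the default server block might look like this after the change (a sketch; the root path and domain are the ones used earlier in this tutorial):</p>
<pre><code class="lang-plaintext">server {
    listen 80 default_server;
    server_name www.lajahmercantile.test.mercantilecloud.com.np;

    # Serve the static site copied earlier
    root /var/www/lajahmercantile@mercantilecloud.com.np;
    index index.html;
}
</code></pre>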
<ol>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708183732085/431b0ade-7fc9-4041-91f1-062842913f34.png" alt class="image--center mx-auto" /></p>
<p> <strong>Restarting Nginx</strong></p>
</li>
</ol>
<pre><code class="lang-jsx">sudo systemctl restart restart nginx
</code></pre>
<ol>
<li><strong>Verifying the changes</strong></li>
</ol>
<p>Sample website successfully hosted on the NGINX server running on the Ubuntu web server, on the custom domain created via Route 53</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708183739596/30f0f19b-08ee-4c02-a2df-086b15392f5a.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item></channel></rss>