Realtime Container Log-aggregation and Centralized monitoring solutions https://www.indiumsoftware.com/blog/realtime-container-log-aggregation-and-centralized-monitoring-solutions/ Wed, 01 Feb 2023

Business owners expect their applications to be highly available with zero downtime; banking and trading platforms that handle multi-currency transactions, for example, must be available 24/7. Real-time monitoring is essential for maintaining 100% uptime and meeting recovery point objectives (RPO).

Organizations want monitoring solutions that can observe and publish data as it is processed.

Payment gateways authenticate financial transactions and publish the success/failure status after they complete. Transaction status is required for end-of-day (EOD) billing.

Establishing centralised monitoring and alerting mechanisms necessitates a thorough examination of application- and system-level logs. If an incident occurs, all parties involved are notified via message, email, or dashboard so that the affected teams can respond immediately.

This article will go over the log aggregation process for containerized applications running in a Kubernetes cluster.

Business Case

One of our clients approached Indium for improved visibility into their log aggregation and system metrics visualisation. The client has over 100 applications running in a variety of environments, which made proactive monitoring and maintaining 100% uptime of business-critical applications nearly impossible. They also had to manually search through multiple text filters in CloudWatch, a time-consuming and labour-intensive process, and they ran the risk of outages that could result in SLA violations.

As a result, the customer’s top priority was to monitor these business applications centrally and effectively.

There are numerous monitoring options on the market. Traditionally, the NOC team performs monitoring and incident response. In such cases, human intervention is required, and there is a risk of missing an incident or responding too slowly. For automated monitoring, the ELK stack is frequently used; it saves time and money by reducing manual intervention.

The ELK Stack assists users by providing a powerful platform that collects and processes data from multiple data sources, stores that data in a centralised data store that can scale as data grows, and provides a set of tools for data analysis. All of the aforementioned sources, including operating system logs, NGINX and IIS server logs for web traffic analysis, application logs, and AWS (Amazon Web Services) logs, can be monitored by a log management platform.


Log management enables DevOps engineers and system administrators to make more informed business decisions. As a result, log analysis using Elastic Stack or similar tools is critical.

The diagram below depicts the ELK stack workflow and log flow transmission.

Business Need & Solution delivered

  • The client lacked an alerting system to warn of application failures. The ELK server had recently crashed under heavy load, and the affected team was unaware of the incident for three days.
  • The Indium team proposed and implemented an alerting mechanism for the ELK server and applications.
  • To avoid future failures, we wrote serverless computing code in Python and deployed it to the customer's infrastructure via AWS Lambda functions.
  • When a pod in the Kubernetes cluster fails, an event is triggered that invokes the Lambda function.

The Lambda function monitors health and notifies affected teams via email. We also delivered email notifications of Kubernetes pod resource utilisation, covering CPU, memory, and network usage. Elasticsearch DSL queries and metric thresholds were used to configure these notification emails; if any of these system resources becomes unavailable, the client is notified via email. The Indium team used the ELK stack to deploy a centralised monitoring solution and created a dashboard for each environment that displays the metrics in use.
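
To make the flow concrete, here is a minimal sketch of such a Lambda handler. The event fields (pod, namespace, reason) and the alert format are assumptions for illustration, not the client's actual implementation, and the email publish step (e.g. via SNS or SES) is stubbed out so the sketch stays self-contained:

```python
import json


def build_alert(event):
    """Format a pod-failure event into an alert message for email/SNS."""
    pod = event.get("pod", "unknown-pod")
    namespace = event.get("namespace", "default")
    reason = event.get("reason", "Unknown")
    return {
        "subject": f"[ALERT]: Pod {pod} failed",
        "body": f"Pod {pod} in namespace {namespace} failed: {reason}",
    }


def lambda_handler(event, context):
    """Entry point: build the alert and (in a real deployment) publish it."""
    alert = build_alert(event)
    # In production, something like boto3.client("sns").publish(...) would go here.
    return {"statusCode": 200, "body": json.dumps(alert)}
```

In a real deployment, the event source would be wired to the Kubernetes failure trigger described above, and the return value would matter only for logging.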

You might be interested in: AWS Lambda to Extend and Scale Your SaaS Application

Below is an example of how the metrics utilization is being captured and notified.

  • Created the alert name in Elasticsearch, e.g. [metrics-prod-CPUUtilization]
  • Set the trigger event to run every 1 minute
  • Configured the condition:

                  WHEN Max OF value.max is above 90%.

  • Added filters as mentioned below:

              metric_name.keyword: "CPUUtilization" and namespace.keyword: "AWS/EC2".

  • Created a group alert on:
    • _index
    • InstanceID
    • metric_name
    • region.keyword
  • Created an email connector: [Connector Name]
  • Configured the sender email; alerts will be sent using this DL
    • Added service – Outlook
    • Host Name
    • Port: 25
  • Created the alert subject as [ALERT]: [PROD]: High CPU usage detected!!
  • The context below is checked and displayed along with the alerts:

             – {{alertName}}

             – {{context.group}} is in a state of {{context.alertState}}

             – Reason: {{context.reason}}

             – Routed the ELK link of the corresponding dashboard.

We also used Elasticsearch query DSL for alert configuration, as shown below.

  • Created the alert name, e.g. [metrics-dev-CPUUtilization].
  • Set the trigger event to run every 1 minute.
  • Selected the index metrics_dev and set the size to 100.
  • Query to capture data for the required metrics:

{
    "query": {
        "bool": {
            "filter": [
                {"match_phrase": {"namespace": "AWS/EC2"}},
                {"match_phrase": {"metric_name": "CPUUtilization"}},
                {"range": {"value.max": {"gte": 90}}}
            ]
        }
    }
}

  • Configured the conditional statements:
    • If the metric utilization is above 90%
    • If the utilization persists for more than 5 minutes
    • If both conditions are satisfied, an alert email is sent
  • Added the [Connector Name] (created earlier) in Actions
    • Run when – QUERY MATCHED
  • Configured the email recipients to receive the alert notifications
    • Created the alert subject as [ALERT]: [DEV]: High CPU usage detected!!
  • Added the conditions below to display along with the alerts:
    • {{alertName}}
    • {{context.group}} is in a state of {{context.alertState}}
    • Reason: {{context.reason}}
    • Routed the ELK link of the corresponding dashboard
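
The two conditional checks above (utilization above 90%, persisting for more than 5 minutes) can be sketched as a simple evaluation over one-minute samples. This is an illustrative helper, not the Kibana rule engine itself; the function and parameter names are assumptions:

```python
CPU_THRESHOLD = 90.0   # percent, matching the configured alert threshold
PERSIST_MINUTES = 5    # how long the breach must persist before alerting


def should_alert(samples):
    """Decide whether to send an alert email.

    samples: list of (minute, max_cpu_percent) tuples, one per 1-minute
    trigger run. Returns True only when the most recent PERSIST_MINUTES
    samples all exceed CPU_THRESHOLD.
    """
    if len(samples) < PERSIST_MINUTES:
        return False  # not enough history to confirm persistence
    recent = samples[-PERSIST_MINUTES:]
    return all(cpu > CPU_THRESHOLD for _, cpu in recent)
```

A single spike below the threshold inside the window resets the decision, which mirrors the "persists for more than 5 minutes" condition.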

We successfully configured all of the dashboards, and our customers are using them for real-time monitoring. The Indium team accomplished this in just four months. Benefits of the solution include lower costs and less manual labour.

If you want more information or want to know how we do it, contact us. We are here to assist you.

Benefits

The customer benefits from the use of the centralised notification method. Here are a few standouts.

  • The customer now has a centralized monitoring dashboard through which they receive resource utilisation and incident notifications via email.
  • 75% less manual effort than the previous method of refreshing the CloudWatch console every few minutes to see the logs.
  • With the Kibana dashboards in place, the centralized dashboard provides a unified view of logs collected from multiple sources.
  • The TAT (turnaround time) and MTTR (mean time to resolve) for incident resolution have been reduced as a result.
  • An entirely open-source stack was used to create a low-cost monitoring solution.

The post Realtime Container Log-aggregation and Centralized monitoring solutions appeared first on Indium.

Creating Scalable CI/CD Pipelines for DevOps with Various AWS Developer Tools https://www.indiumsoftware.com/blog/ci-cd-pipelines-for-devops-with-aws-developer-tools Mon, 01 Aug 2022

Businesses need to accelerate the delivery of applications and services to improve customer experience and gain a competitive advantage. The traditional waterfall development method is slow, which is why DevOps services have gained popularity. DevOps enables businesses to shorten the product development lifecycle and update products faster.

Starling Bank is a UK-based mobile-only bank offering innovative, seamless financial services such as real-time payment visibility and contactless debit cards. The app is at the core of all its banking operations, connecting customers to the bank and enabling them to conduct transactions without glitches. Ensuring that it works every time is therefore essential for the bank.

The company used AWS continuous integration/continuous delivery/deployment (CI/CD) with the DevOps approach to enable fast testing and scaling capabilities. This enabled Starling to address any bugs before they could impact customers.


CI/CD solutions are integral to the DevOps work environment. They automate the complete workflow of updating software or applications, from building through testing, packaging, and deploying. In other words, CI/CD is a pipeline: new code enters at one end, is tested as it progresses through the source, build, test, and staging stages, and is then moved to the production environment.

Benefits of a CI/CD Pipeline

Using a CI/CD pipeline enables the frequent release of new features and updates based on inputs from monitoring the app's performance and feedback from customers. It reduces the risk of errors due to human intervention by automating the process. A structured, automated process also improves productivity. Using a microservices approach helps with releasing components independently of each other.

The second advantage is improving the code’s quality since, at each stage of the CI/CD pipeline, the code is tested and verified. Whenever a problem is identified, the code does not progress and is sent back for debugging to the team.
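
The stage-gated behaviour described here, where code advances only while every check passes, can be sketched as follows. The stage names and check predicates are illustrative, not a real AWS API:

```python
STAGES = ["source", "build", "test", "staging", "production"]


def run_pipeline(change, checks):
    """Run a code change through each pipeline stage in order.

    change: arbitrary dict describing the change under test.
    checks: maps stage name -> predicate; stages without a check pass by default.
    Returns (reached_production, last_stage_reached). A failing check stops
    progression, modeling the "sent back for debugging" behaviour.
    """
    last = None
    for stage in STAGES:
        check = checks.get(stage, lambda c: True)
        if not check(change):
            return False, stage  # stop here; change goes back to the team
        last = stage
    return True, last
```

The point of the sketch is the early return: a problem at any stage prevents the change from ever reaching production.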

This article might give you an extended list of benefits of implementing CI/CD solutions: CI/CD- The Advantage You Didn’t Know You Needed

Use Cases of CI/CD Workflows in AWS

Some of the production-ready CI/CD services on AWS include:

Deploying Dockerized Microservices: It is easy to deploy and scale Dockerized microservices with the AWS architecture.

Serverless Functions: A highly resilient and fault-tolerant CI/CD pipeline can be set up for automating the deployment process of Lambda-based serverless applications.

Machine Learning Pipelines: Machine learning models can be developed, trained, tested, deployed, managed, and monitored in AWS in a cloud-based environment.

AWS Developer Tools to Create Scalable CI/CD Pipelines

Amazon Web Services (AWS) provides CI/CD developers with tools for accelerating software development and shortening release cycles. These flexible services enable a scalable CI/CD pipeline by:

  • Simplifying provisioning and infrastructure management
  • Deploying application code
  • Automating the processes for software release
  • Monitoring the performance of the application and the infrastructure

The scalable developer tools for CI/CD pipelines include:

CodePipeline: This helps to automate the building, testing, and deploying stages of the release process whenever the code changes. It uses a defined release model that facilitates delivering features and updates rapidly and reliably. Pipelines can also integrate with other AWS services, including Amazon Simple Storage Service (Amazon S3), and with third-party products such as GitHub. Some of the use cases that AWS CodePipeline can address include:

(i) Code compilation, building, and testing using AWS CodeBuild

(ii) Delivering container-based applications continuously to the cloud

(iii) Pre-deployment validation of artifacts, including descriptors and container images, for cloud-native network services or functions

(iv) Testing functionality, performance, baseline, regression, and integration for containerized network functions/virtual network functions (CNF/VNF)

(v) Testing for reliability and disaster recovery (DR)

CodeCommit: CodeCommit is a fully managed source control service hosting secure Git-based repositories. It provides a secure and highly scalable ecosystem that facilitates collaboration on code. This solution uses CodeCommit to create a repository to store the application and deployment code.

CodeBuild: CodeBuild is a fully managed continuous integration service used to build and test code. The source code is compiled and tested, and ready-to-deploy software packages are produced on a dynamically created build server.
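
For context, CodeBuild reads its build instructions from a buildspec.yml file at the root of the source repository. A minimal example might look like the following; the Node.js runtime and npm commands are placeholder assumptions, not the project's actual build steps:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18          # assumed runtime; pick the one your project uses
  build:
    commands:
      - npm ci            # install dependencies from the lockfile
      - npm test          # run the test suite; a failure stops the build
      - npm run build     # produce the deployable artifacts
artifacts:
  files:
    - '**/*'
  base-directory: dist    # assumed output directory of the build step
```

The artifacts section tells CodeBuild what to hand off to the next pipeline stage (typically via an S3 bucket).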

CodeDeploy: A fully managed deployment service that deploys the code or application onto a set of EC2 instances. It runs CodeDeploy agents to automate software deployments to several compute services, including Amazon EC2, AWS Lambda, AWS Fargate, and even on-premises servers.

CodePipeline: Quick and reliable updates of applications and infrastructure are made possible by this fully managed continuous delivery service, which automates the release pipeline. An end-to-end pipeline fetches the application code from CodeCommit, builds and tests it using CodeBuild, and deploys it using CodeDeploy.

CloudWatch Events: An AWS CloudWatch Events rule triggers the CodePipeline whenever a commit is pushed to the repository on CodeCommit.
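
Such a rule matches CodeCommit state-change events with an event pattern along these lines; the branch name is a placeholder:

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "detail": {
    "event": ["referenceUpdated"],
    "referenceType": ["branch"],
    "referenceName": ["main"]
  }
}
```

The rule's target is the pipeline itself, so every push to the watched branch starts a new pipeline execution.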

A must read: Using Kubernetes to Run CI/CD Pipelines in Large-scale, Cloud-native Applications

Amazon Simple Storage Service (Amazon S3): Objects can be stored in this industry-leading scalable storage service that ensures data availability, performance, and security. S3 buckets are used for storing the build and deployment artifacts that were created during the pipeline run.

AWS Key Management Service (AWS KMS): Cryptographic keys can be created easily using AWS KMS. It also allows easy management and control of their use across several AWS services and applications. The build and deployment artifacts in the S3 bucket are also encrypted at rest.

Indium–AWS Partner

Indium Software is an AWS partner with years of experience in DevOps and CI/CD. Our team of AWS experts leverages a set of pre-fabricated toolsets to break barriers to innovation and enable agile development of applications and software. We have cross-domain expertise, which helps us understand use cases and design and build software with assured outcomes. We have a proven track record of aligning strategic business goals with cloud platform capabilities, with more than 250 CI/CD deployments in AWS.

Our AWS capabilities span consulting, system integration, and industry solutions, as we empower our clients to speed up their digital transformation journey.

The post Creating Scalable CI/CD Pipelines for DevOps with Various AWS Developer Tools appeared first on Indium.

DevOps and Its Role in Cloud Deployment https://www.indiumsoftware.com/blog/devops-and-its-role-in-cloud-deployment/ Mon, 22 Nov 2021

In a way, DevOps and Cloud serve similar purposes. Both promote automation, enhance the speed of development and deployment, and facilitate communication and collaboration, so it can be said that the two go hand in hand.

DevOps, a combination of the terms ‘development’ and ‘operations’, stands for the collaborative or shared approach taken by the application and IT operations team to perform their tasks. It facilitates greater communication and collaboration between teams while enabling iterative development of software, automating the process, and enabling programmable infrastructure deployment and maintenance.

DevOps is reported to speed up the process of software development by 50%; in combination with the cloud, the gain is said to reach 80%.

Cloud computing provides DevOps automation with a centralized platform for testing, deployment, and production. It reduces the number of leveraged resources that need to be accounted for, while tracking costs and making adjustments as required. Broadly speaking, the combination of cloud deployment and DevOps increases the efficiency and cost-effectiveness of digital transformation efforts.

Bringing Agility to Cloud Adoption

GitLab’s Fifth Annual Global DevSecOps Survey reveals that DevOps led to 2x faster release of code by 60% of developers. Of the 4,300 respondents surveyed from across the globe, a majority experienced a dramatically faster pace of technology adoption due to DevOps and 84% of developers reported faster code release than ever before. As against 8% in 2020, 55% of operations teams reported that their life cycles were either completely or mostly automated.


Apart from speed, some of the other advantages include:

Ease of Automation: DevOps services can simplify the automation of infrastructure management on the cloud as it enables server management, OS patching, implementing CI/CD for deployment automation, testing, and report generation.

Replication of Cloud Servers: Using DevOps, the complex replication process can be automated. Servers that would otherwise have to be launched manually whenever a backup is taken can be launched automatically with DevOps, which also provides tools to define the hierarchy/pattern of the infrastructure and its inter-communication patterns.

DevOps Orchestration: DevOps orchestration tools such as Ansible, Chef, and Puppet facilitate the complete coordination and control of automation of the entire hierarchy in the infrastructure. By integrating with the cloud, functions such as automated server provisioning and auto-scaling become possible.
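
As a hedged illustration of such orchestration, an Ansible playbook for basic server provisioning might look like this; the host group and package names are assumptions for the example:

```yaml
- name: Provision web servers
  hosts: webservers          # assumed inventory group
  become: true               # escalate privileges for package/service tasks
  tasks:
    - name: Install NGINX
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure NGINX is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because tasks are declarative and idempotent, re-running the playbook converges every server in the group to the same state, which is what makes automated provisioning and auto-scaling practical.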

Effective Monitoring: Cloud monitoring typically involves receiving an email alert in case of any infrastructural assets deviating from the norm. By integrating with DevOps, it is possible to customize trigger alarms and other alerts enabling better utilization of resources.

Rapid Deployment: As discussed earlier, speed is one of the greatest advantages of DevOps. When clubbed with the cloud, rapid deployment becomes simpler as it helps with solving infrastructural problems using the latest tools and building custom logic and writing capabilities. The whole process can be automated and error-free using single-click build tools that interact with the cloud services.

To ensure the successful integration of DevOps and Cloud, some of the prerequisites include:

● Defining your development requirements by assessing your current needs and future roadmap.

● Define the business case and the expected ROI.

● Define the initial DevOps processes, solutions, and the target cloud platform or platforms.

● Take a top-down approach to embrace the cloud as well as DevOps if both are new to the organization to get the buy-in of the resources.

You might be interested in: https://www.indiumsoftware.com/blog/process-of-devops-on-a-cloud-platform/

Best Practices to Make DevOps Possible for Cloud Deployment

To leverage DevOps for cloud deployment, both need to be understood and implemented well and decisions regarding DevOps tools and cloud platforms should be done in an integrated way. It also requires a change in the development culture of the enterprise and needs corresponding budgets.

Some of the best practices to ensure the successful use of DevOps for Cloud Deployment include:

● Improve teamwork and communication by connecting the workflows of development and operations teams using automation.

● An application’s technology stack can be provisioned and managed using practices such as version control and continuous integration.

● Any problems that arise can be handled by managing logs and monitoring application and infrastructure performance.

● Automate tests and builds by combining code edits within a shared archive on a routine basis.

● A microservices architecture makes it easier to build and manage applications and to add functionality quickly.

Indium — To Enable DevOps in Cloud Deployment

Indium Software is a rapidly growing technology services company with deep digital engineering expertise across Cloud Engineering, Data and Analytics, DevOps, Application Engineering, and Digital Assurance.

We bring to the fore deep expertise and experience in DevOps as well as cloud and can speed up your digital transformation efforts by integrating the two efficiently. Indium’s comprehensive set of DevOps services facilitates high-quality throughput of new capabilities and covers:

CI/CD Services

Create block-free code pipelines that ensure a smooth value stream, from development and integration through testing to deployment.

Deployment Automation

Contain the complexity of deployment with automation, freeing up time for value-added tasks

Containerization

With packaged executables, build anywhere, deploy anywhere with confidence.

The post DevOps and Its Role in Cloud Deployment appeared first on Indium.
