Machine Learning using Google's Vertex AI https://www.indiumsoftware.com/blog/machine-learning-using-googles-vertex-ai/ Thu, 02 Feb 2023 10:38:31 +0000

The post Machine Learning using Google’s Vertex AI appeared first on Indium.

Image by Google

What is Vertex AI?

"Vertex AI is Google's machine learning platform, which provides many services such as training models using AutoML or custom training."

Image by Google

Features of Vertex AI

We use Vertex AI to perform the following tasks in the ML workflow:

  • Create a dataset and upload data
  • Train an ML model
  • Evaluate model accuracy
  • Tune hyperparameters (custom training only)
  • Store the model in Vertex AI
  • Deploy the trained model to an endpoint for predictions
  • Send prediction requests to the endpoint
  • Manage models and endpoints

To walk through the Vertex AI workflow, we will train a "Dogs vs. Cats" classification model using Vertex AI's AutoML feature.

Step 1: Creating Dataset

We will download the dataset from Kaggle. The downloaded archive contains two zip files, train.zip and test.zip; train.zip contains the labelled images for training.

There are about 25,000 images in train.zip and 12,500 in test.zip. For this project we will use only 200 cat and 200 dog images for training, and we will use the test set to evaluate the performance of our model.
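Drawing that balanced 200/200 subset from the extracted files can be sketched as below. This is illustrative only: it assumes the Kaggle naming convention (cat.N.jpg / dog.N.jpg) and a fixed seed for reproducibility; neither the function name nor the seed appears in the original workflow.

```python
import random

def balanced_subset(filenames, per_class=200, seed=42):
    """Pick an equal number of cat and dog images from the extracted train set."""
    rng = random.Random(seed)
    cats = [f for f in filenames if f.startswith("cat.")]
    dogs = [f for f in filenames if f.startswith("dog.")]
    return rng.sample(cats, per_class) + rng.sample(dogs, per_class)

# Kaggle's train set names files cat.0.jpg ... cat.12499.jpg, likewise for dogs.
all_files = [f"cat.{i}.jpg" for i in range(12500)] + [f"dog.{i}.jpg" for i in range(12500)]
subset = balanced_subset(all_files)
```

The selected files would then be the ones uploaded to the bucket in the next step.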

After extracting the data, I uploaded the images to a Google Cloud Storage bucket called dogs_cats_bucket1, which I created in the us-central1 region. The images are stored in two folders in the bucket, train and test.

Best Read: Top 10 AI Challenges

Now we need to create a CSV file with the image paths and labels; for that I have written the following lines of code.

from google.cloud import storage
import pandas as pd
import os

# Authentication using a service account key file.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "/content/dogs-vs-cats-354105-19b7b157b2b8.json"

BUCKET = 'dogs_cats_bucket1'
DELIMITER = '/'
TRAIN_PREFIX = 'train/'
TRAIN_BASE_PATH = f'gs://{BUCKET}/{TRAIN_PREFIX}'

print("Starting the import file generation process")
print("Process Details")
print(f"BUCKET : {BUCKET}")

storage_client = storage.Client()
data = []

print("Fetching list of train objects")
train_blobs = storage_client.list_blobs(BUCKET, prefix=TRAIN_PREFIX, delimiter=DELIMITER)

# Label each image from its file name (cat.*.jpg / dog.*.jpg).
for blob in train_blobs:
    label = "cat" if "cat" in blob.name else "dog"
    full_path = f"gs://{BUCKET}/{blob.name}"
    data.append({
        'GCS_FILE_PATH': full_path,
        'LABEL': label
    })

df = pd.DataFrame(data)
df.to_csv('train.csv', index=False, header=False)

After running the script in a Jupyter Notebook, we have the required CSV file; we will upload it to the same storage bucket as well.
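Before uploading, a quick local sanity check of the import file can save a failed import. A minimal sketch using pandas, with a couple of illustrative rows in the same shape the script above produces (the real file lists all 400 training images):

```python
import pandas as pd

# Two example rows: GCS path first, label second, no header row.
rows = [
    ("gs://dogs_cats_bucket1/train/cat.0.jpg", "cat"),
    ("gs://dogs_cats_bucket1/train/dog.0.jpg", "dog"),
]
pd.DataFrame(rows).to_csv("train_sample.csv", index=False, header=False)

# Read it back the way Vertex AI will interpret it.
df = pd.read_csv("train_sample.csv", header=None, names=["GCS_FILE_PATH", "LABEL"])
assert df["GCS_FILE_PATH"].str.startswith("gs://").all()
assert set(df["LABEL"]) == {"cat", "dog"}
```

If either assertion fails on the real train.csv, the Vertex AI import would reject those rows.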

Now in the Vertex AI section go to Datasets and enable the Vertex AI API.

Click Create Dataset and name it. I have named it cat_dog_classification. We will select Image Classification (Single-label). Make sure the region is us-central1. Hit Create.

In the next section, mark Select import files from Cloud Storage, select train.csv via Browse, and hit Continue.


Vertex AI took 16 minutes to import the data. Now we can see the data in the Browse and Analyse tabs.


Now we can train the model.

Step 2: Model Training

Go to Vertex AI, then to Training section and click Create. Make sure the region is us-central1.

In the Dataset select cat_dog_classification and keep default for everything else with Model Training Method as AutoML.

Click continue for the Model Details and Explainability with the default settings.

For Compute and Pricing, set a budget of 8 maximum node hours.

Hit Start Training.


The model training completed after 29 minutes.

Step 3: Model Evaluation

Clicking on the trained model takes us to the model stats page, where we have stats like the Precision-recall curve, Precision-recall by threshold, and the Confusion matrix.

With the above stats the model looks good.

Step 4: Model Deployment

Go to Vertex AI, then to the Endpoints section and click Create Endpoint. Make sure the region is us-central1.

Give dogs_cats as the name of Endpoint and click Continue.

In the Model Settings, select cat_dog_classification as the Model Name, Version 1 as the Version, and 2 as the number of compute nodes.

Click Done and Create.

It takes about 10 minutes to deploy the model.

With this our model is deployed.

Step 5: Testing Model

Once the model is deployed, we can test it by uploading a test image or by creating a Batch Prediction.

To Test the Model, we go to the Deploy and Test section on the Model page.

Click Upload Image to upload a test image.

With this we can see our model is working well on test images.

We can also connect to the Endpoint using Python and get the results.
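As a sketch of that, the request payload for a Vertex AI image endpoint is a base64-encoded instance; sending it assumes the google-cloud-aiplatform package and a placeholder endpoint ID (neither shown in the post), so the actual call is indicated in comments only:

```python
import base64

def build_instance(image_path):
    """Encode a local image file the way Vertex AI image endpoints expect:
    a JSON instance whose "content" field holds base64-encoded bytes."""
    with open(image_path, "rb") as f:
        return {"content": base64.b64encode(f.read()).decode("utf-8")}

# Sending the request (placeholders for project and endpoint IDs), roughly:
#
#   from google.cloud import aiplatform
#   aiplatform.init(project="YOUR_PROJECT", location="us-central1")
#   endpoint = aiplatform.Endpoint("YOUR_ENDPOINT_ID")
#   response = endpoint.predict(instances=[build_instance("test/1.jpg")])
#   print(response.predictions)
```

The response carries confidence scores per label, mirroring what the Deploy and Test page shows in the console.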

For more details on our AI and ML services, visit this link.

This is the end of my blog. We have learned how to train an image classification model on Google's Vertex AI using the AutoML feature. I have enjoyed every minute of working on it.

In the next article, we will see how to train a custom model on Vertex AI with TensorFlow.

Stay Tuned.

Building the Right Architecture for MLOps https://www.indiumsoftware.com/blog/building-the-right-architecture-for-mlops/ Tue, 18 Oct 2022 07:14:25 +0000

The post Building the Right Architecture for MLOps appeared first on Indium.

Machine learning projects are expanding, with the global machine learning (ML) market expected to grow at a CAGR of 38.8%, from $21.17 billion in 2022 to $209.91 billion by 2029. To accelerate the speed of development and shorten the time to market, businesses are combining DevOps principles with ML development and data engineering. Called MLOps, or Machine Learning Operations, this approach streamlines the production, maintenance, and monitoring of machine learning models and is a collaborative effort between IT, data scientists, and DevOps engineers. It also involves automating operations using ML-based approaches to customize service offerings and improve productivity and efficiency.

Some of the benefits of MLOps include faster and easier deployment of ML models. It helps with continuous improvement in a cost, time, and resource-efficient way by facilitating collaboration between different teams and tasks. The models can also be easily reused for other use cases using MLOps. As validation and reporting are an integral part of the system, it makes monitoring easy.

To know more about how Indium can help you build the right MLOps architecture using Databricks, get in touch.

Preparing to Build the MLOps Architecture

The development of an MLOps project is just like any other project, but with a few additional steps to ensure an easy and seamless flow.

Setting up the Team: Planning and assembling the right team is the first step. Depending on how complex the project is, the team will include one or more ML engineers, data engineers to manipulate data from various sources, data scientists for data modeling, and DevOps engineers for development and testing.

ETL: For data modeling to develop machine learning algorithms, data needs to be extracted from various sources, and a pipeline created for seamless data extraction in the system. The data needs to be cleaned and processed using an automated system that helps with seamless transformations and delivery.

Version Control: Like in DevOps, version control plays an important role in MLOps too, and Git repository can be used for this as well.

Model Validation: In DevOps, testing is important and includes unit testing, performance, functionality, integration testing, and so on. The equivalent in an ML project is a two-step process – model validation and data validation.

Monitoring: Once the software has gone live, the role of DevOps ends until the next enhancement. In an MLOps project, though, periodic monitoring of ML model performance is essential. The model is validated against live data using the original validation parameters, which helps identify any problems that require the model to be reworked.
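The monitoring step above can be reduced to a simple check: re-score live data and compare against the thresholds recorded at validation time. A purely illustrative sketch (the function name, metric names, and tolerance are all hypothetical, not part of any specific MLOps toolkit):

```python
def needs_rework(live_metrics, baseline_metrics, tolerance=0.05):
    """Flag the model for rework when any live metric falls more than
    `tolerance` below the value recorded at validation time."""
    return any(
        live_metrics[name] < baseline - tolerance
        for name, baseline in baseline_metrics.items()
    )

# Hypothetical baseline recorded during model validation.
baseline = {"accuracy": 0.95, "recall": 0.92}
assert not needs_rework({"accuracy": 0.94, "recall": 0.91}, baseline)  # small dip: OK
assert needs_rework({"accuracy": 0.85, "recall": 0.91}, baseline)      # drift: rework
```

In practice this check would run on a schedule against fresh labelled samples, with the alert wired into the team's incident tooling.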


Databricks for MLOps Architecture: 5 Things to Consider

While this makes MLOps sound ideal and easy, in reality one of its challenges is the need for substantial infrastructure, including computing power and memory capacity, that on-premise systems cannot meet without additional costs. Cloud architecture is therefore a better alternative: it allows quick scaling up and down and keeps costs proportional to actual need.

It also needs constant monitoring due to the ever-changing requirements and the need for the models to reflect these changes. As a result, businesses must frequently monitor the parameters and modify the variables of the model as and when required.

Some key challenges may also arise in MLOps with regard to managing data, code, and models across the development lifecycle. Multiple teams handling the various stages of development, testing, and production collaborate on a single platform, leading to complex needs for access control and parallel use of multiple technologies.

Databricks, with its robust ELT, data science, and machine learning features, is well-suited for building the MLOps architecture. Some of the factors that make Databricks consulting services ideal for MLOps include:

Lakehouse Architecture: Databricks uses a Lakehouse architecture to meet these challenges and unify data lakes and data warehouse capabilities under a single architecture and use open formats and APIs to power data workloads.

Operational Processes: The process of moving the ML project through the development cycle should be clearly defined, covering coding, data, and models. Databricks allows the code for ML pipelines to be managed using the current DevOps tooling and CI/CD processes. It simplifies operations by following the same deployment process as model training code for computing features, inference, and so on. MLflow Model Registry, a dedicated service, is used to update code and models independently, enabling the adoption of DevOps methods for ML.

Collaboration and Management: Databricks provides a unified platform on a shared lakehouse data layer. In addition to facilitating MLOps, it allows ML data management for other data pipelines. Permission-based access control across the execution environments, data, code, and models simplifies management and ensures the right levels of access to the right teams.

Integration and Customization: Databricks uses open formats and APIs, including

– Git

– Related CI/CD tools

– Delta Lake and the Lakehouse architecture

– MLflow

Additionally, the data, code, and models are stored in open formats in the cloud account and supported by services with open APIs. All modules can be integrated with the current infrastructure and customized by fully implementing the architecture within Databricks.

Managing the MLOps Workflow: Databricks provides data scientists with a development environment to build ML pipeline code spanning computation, inference, model training, monitoring, and more. These pipelines are tested in the staging environment and finally deployed in the production environment.

Check out our MLOps solution capabilities.

Indium’s Approach

Indium Software has deep expertise in Databricks and is recognized by ISG as a strong contender for data engineering projects. Our team of experts works closely with our partners to build the right MLOps architecture using the Databricks Lakehouse to transform their business. Ibrix, Indium's unified data analytics platform, integrates the strengths of Databricks with Indium's capabilities to improve business agility and performance by providing deep insights for a variety of use cases.

To know more about how Indium can help you build the right MLOps architecture using Databricks solutions, inquire now!

CEOs and CXOs, how are you handling the shift to AI technologies? https://www.indiumsoftware.com/blog/ceos-and-cxos-how-are-you-handling-the-shift-to-ai-technologies Tue, 31 May 2022 04:45:42 +0000

The post CEOs and CXOs, how are you handling the shift to AI technologies? appeared first on Indium.

Disruptors have set new norms in customer service, speed-to-market, and innovation in every industry. Artificial intelligence solutions have reached a tipping point, with prominent companies displaying ground-breaking achievements, altering the marketplace, and distinguishing themselves in their fields. Strategic enablers such as automation, prediction, and optimization are at the heart of AI.

The ability of your company to automate routine processes, forecast results, and optimise resources is critical to its success. High-growth firms are, in fact, meeting the business imperatives—creating exceptional customer experiences, expediting product and service delivery, optimising operations, and profiting from the ecosystem. They are realizing these achievements while also meeting compliance and risk management needs at scale.

For cutting-edge AI/ML solutions paired with data and analytics solutions, contact us now!

The vision of artificial intelligence-driven technologies taking over work at all levels of business has evolved into one in which AI serves more as an assistant: it takes over various activities so that humans can focus on what they do best. With AI at their disposal, physicians can spend more time on treatment plans while AI tools take ownership of medical scans. Similarly, a marketing professional can focus better on brand nuances as AI accurately predicts the consequences of various channel spend.

What do AI and innovation bring to the table?

AI is being used by businesses to forecast business outcomes, streamline operations, increase efficiency, guard against cyberthreats and fraud, and uncover new market opportunities. These forecasts can assist leaders in staying one step ahead of the competition and remaining resilient to market volatility.

High-growth leaders, according to Forrester Research, invest extensively in AI. According to a Forrester poll, more than half of respondents expect a five-fold return on their AI investments. High-growth CEOs who invest $10 million can expect a $60 million return on their investment. In comparison to low-growth organisations, leaders spend twice as much on data and analytics and invest 2.5 times as much in AI and machine learning (ML) platforms.

Firms that invest in data scientists with hard-core abilities, such as the ability to create predictive, machine learning, deep learning, natural language processing (NLP), computer vision, and other sorts of models, expand quicker than those that do not.

Most leaders, on the other hand, are now looking to expand the usage of AI. This viewpoint implies that your platform should be built to aid in the operationalization and automation of model and tool management throughout your entire business.

Automation allows your team to refocus on high-value activities that capitalise on your unique selling points. Look for a technology that allows you to automate tasks like:

  • Data preparation
  • Feature development
  • Machine learning algorithm selection
  • Hyperparameter optimization to find the best potential solution
  • Modeling with machine learning and deep learning
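The hyperparameter-optimization item in that list can be illustrated with a toy random search. This is a generic sketch, not any vendor's AutoML implementation; the objective function and search space are invented for the example:

```python
import random

def random_search(score_fn, space, n_trials=100, seed=0):
    """Try random hyperparameter combinations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: pretend the ideal settings are lr=0.1 and depth=3.
space = {"lr": [0.001, 0.01, 0.1], "depth": [1, 3, 5, 7]}
best, best_score = random_search(
    lambda p: -abs(p["lr"] - 0.1) - abs(p["depth"] - 3), space
)
```

Platforms that automate this step run the same loop at scale, swapping the toy objective for real model training and validation.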

You might be interested in: AI/ML and Web 3.0 – The Future of the Internet

A piece of advice on those lines

Some of the concepts, such as "innovate with and for diversity," are refreshingly prescriptive. Others, such as "reduce the risk of unfair bias," are too broad or ambiguous to be actionable. For IT and industry leaders interested in embracing any or all of these ideas, the devil is in the details. Here's a quick rundown of each principle:

  • Innovate with and for a diverse group of people: There are sure to be huge blind spots when the people envisioning and creating an AI system all look alike. Hiring a diverse team to design, install, monitor, and apply AI helps to close these gaps.
  • Design and implement for transparency, explainability, and interpretability: Transparency relies on totally transparent "glass box" algorithms, whereas interpretability relies on techniques that explain how an opaque system like a deep neural network works.
  • Reduce the likelihood of unjust bias: There are over 20 alternative mathematical representations of fairness; the best one is determined by your strategy, use case, and corporate values. To put it another way, fairness is a subjective concept.
  • Examine and track the model's fitness and impact: The pandemic served as a cautionary tale for businesses concerned about data loss. Companies should adopt machine learning operations (MLOps) to keep an eye on AI's performance and explore using bias bounties to crowdsource bias detection.
  • Encourage a responsible AI culture throughout the organization: By employing a chief trust officer or chief ethics officer, some companies are beginning to take a top-down approach to building a culture of responsible AI.
  • Responsibly manage data gathering and use: While the Business Roundtable approach places a premium on data quality and accuracy, it neglects to address privacy concerns. For ethical AI management, it’s critical to understand the interaction between AI and personal data.
  • Invest in a workforce that is AI-ready for the future: The nature of most people's jobs is more likely to be transformed than eliminated by AI, but most workers aren't prepared. They lack the skills, disposition, or trust in AI to make it work for them. Employees can be better prepared to work alongside AI by investing in the robotics quotient, a measure of readiness.
  • Existing governance systems should be updated to account for AI: Ambient data governance, a technique for incorporating data governance into everyday data interactions and intelligently adapting data to user purpose, is well-suited to AI. In the context of AI governance, map your data governance initiatives.
  • Implement AI governance across the entire organization: Governance has become a nasty term in many corporations. This is not only regrettable, but also potentially hazardous. Find out how to get rid of governance fatigue.
