Resource Optimisation in Azure

Provisioning cloud infrastructure is only the first step; the next, and arguably more important, task is managing those resources.

By management I don't mean just applying updates, monitoring resource health, staying current on security patches, reporting metrics and so on. One of the most important jobs is making sure we are utilising the resources optimally.

Resource optimisation involves keeping utilisation within agreed limits and making the setup scalable, so that cost stays under control.

In this blog we will look at the resources and scenarios that typically drive cost, and how to optimise them.

A few pointers

  • This post focuses on Azure, but the techniques can be applied to other cloud providers as well.
  • Some familiarity with Azure resources and their pricing will help in following the topics.
  • We cover only a few key resources here and hope to continue with more in future posts.

VIRTUAL MACHINE

The most fundamental of all resources is the virtual machine, which sits in the Compute area of Azure.

What could go wrong: Teams, whether dev or DevOps, tend to create resources with high configurations (CPUs, memory, disk), assuming this is best for the applications using them. Two things typically happen. VM size: the size determines the CPU and memory allocated, and a high CPU count and large memory are often not needed. Disk: premium disks are not needed in lower environments, and in some cases not even in prod and pre-prod, so Standard HDD or SSD can be used instead.
How to approach: For non-prod environments, consult the team and remove unused VMs every fortnight (the interval is your choice); the team and team leads should be aware of these actions. Also reduce VM sizes, even if only for a week or two, as this saves a considerable amount. For prod, unlike dev, you always have to be careful when updating resources, so only a few approaches apply here; try converting the instances to reserved instances.

What could go wrong: A fixed number of instances behind a load balancer or in a VMSS (Virtual Machine Scale Set). With a fixed instance count, the applications deployed on them may not use the full capacity.
How to approach: It is better to enable autoscaling, so that the number of instances rises as demand rises and falls as it drops.

What could go wrong: Resources left running during idle periods. Sometimes resources that are not critical to production keep running through nights and weekends, which incurs extra cost.
How to approach: Schedule a script (Function App or Logic App) to switch VMs off and on during graveyard shifts and weekends; see the sketch after this table. Azure itself offers scheduled start/stop, but not as a bulk action, so a custom script works better.

What could go wrong: Application developers tend to store data on the VM's disk, so more and more disk ends up being provisioned on the VM itself.
How to approach: Advise developers to use a storage account or, better, to attach file storage as a disk on the VM.
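
The scheduled stop/start approach above is straightforward to script. Here is a minimal Python sketch using the azure-identity and azure-mgmt-compute packages; the `auto-shutdown` tag name, the environment variable, and the function name are our own illustration, not an Azure convention. A Function App timer trigger could call it in the evening and run the "start" variant in the morning.

```python
# Minimal sketch: deallocate (or start) every VM tagged auto-shutdown=true.
# Assumption: this runs under an identity with permission to manage VMs.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = os.environ["AZURE_SUBSCRIPTION_ID"]  # hypothetical env var


def stop_tagged_vms(action: str = "deallocate") -> None:
    """Deallocate (or start) every VM carrying the auto-shutdown tag."""
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

    for vm in compute.virtual_machines.list_all():
        tags = vm.tags or {}
        if tags.get("auto-shutdown", "").lower() != "true":
            continue
        # The resource group is the 5th segment of the ARM resource id.
        resource_group = vm.id.split("/")[4]
        if action == "deallocate":
            # Deallocating (not just powering off) is what stops compute billing.
            compute.virtual_machines.begin_deallocate(resource_group, vm.name)
        else:
            compute.virtual_machines.begin_start(resource_group, vm.name)
        print(f"{action} requested for {vm.name} in {resource_group}")


if __name__ == "__main__":
    stop_tagged_vms("deallocate")  # schedule the "start" variant for mornings
```

Tagging eligible VMs rather than hard-coding names keeps the script safe to run across the whole subscription: production VMs simply never get the tag.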

STORAGE

Data storage in the cloud has to be handled with precision. This is not only about the Azure storage account, but also about the databases holding data.

What could go wrong: Too many unnecessary requests. The first and foremost issue is putting heavy stress on the DB through the sheer number of requests performing operations on it. Application developers should be warned about this early in the development phase, whether before or after the issue is identified.
How to approach: The application, or any data tool connecting to the DB, should fetch as much data as possible over a limited number of connections. If needed, a caching mechanism (Azure Cache for Redis) can serve data that does not change and is needed quickly.

What could go wrong: Data storage. The next most obvious cause is storing large amounts of data that is not immediately required. Data that the application considers old often sits in a tier where it costs more.
How to approach: Move it to archive storage; retrieval takes longer, but the cost is minimal. The sketch after this table shows the idea.

What could go wrong: Data transfer. Azure charges for data transfers to the internet and also between regions.
How to approach: Be careful about data consumption. If you are working on a data analytics project, work locally first before moving the application DB to the cloud.

What could go wrong: DB licence cost. Paid database tiers and licences get provisioned where cheaper options would do.
How to approach: When provisioning DB servers for non-prod, it is always better to use a valid free tier where available; this reduces cost considerably. If you already hold licences for the DBs you own, use Azure Hybrid Benefit.

What could go wrong: Replication. Data replication is available at several levels: local, zone, region, etc. The higher levels cost more.
How to approach: Keep the local replication strategy for non-prod environments, and use zone- or region-level replication only in pre-prod and prod.
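
To make the archival idea concrete, here is a minimal Python sketch using the azure-storage-blob package. The 180-day cutoff, container name, and environment variable are illustrative assumptions, not recommendations from this post. In practice a storage-account lifecycle management policy can do the same thing without code; the script just makes the rule explicit.

```python
# Minimal sketch: move blobs untouched for 180+ days to the Archive tier.
import os
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = os.environ["STORAGE_CONNECTION_STRING"]  # hypothetical
CONTAINER = "app-data"  # hypothetical container name
CUTOFF = datetime.now(timezone.utc) - timedelta(days=180)


def archive_cold_blobs() -> None:
    """Set the Archive tier on blobs not modified since CUTOFF."""
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    container = service.get_container_client(CONTAINER)

    for blob in container.list_blobs():
        if blob.last_modified < CUTOFF:
            # Archive is the cheapest tier; retrieval requires rehydration,
            # which suits data that is not immediately required.
            container.get_blob_client(blob.name).set_standard_blob_tier("Archive")
            print(f"archived {blob.name}")


if __name__ == "__main__":
    archive_cold_blobs()
```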

DATA TRANSFER

This is easy to overlook, but it is one of the main factors that quietly consumes resources and drives up cost.

What could go wrong: Redundancy in another region. When data is transferred from one region to another, every byte incurs cost, and teams end up sending data that no longer needs to be backed up or transferred.
How to approach: Whenever there is a requirement to set up data redundancy in another region, make sure you transfer only the data that is actually required.

What could go wrong: Internet egress. Ingress from the internet into Azure is free of charge; however, data going out of Azure to the internet is charged.
How to approach: When working in a data tool on your local machine against an Azure-based DB, work first against a local copy of that DB; this keeps cost down during the development phase. When the application is deployed in Azure, keep it and the DB in the same region, and where possible the same availability zone, to keep cost low.

What could go wrong: Monitoring logs. Diagnostics and boot logs from compute resources are pumped into a Log Analytics workspace or storage account when configured, and a few resources such as AKS and APIM can, depending on usage, ingest huge amounts of data into Log Analytics, which incurs cost.
How to approach: Make sure Log Analytics is in the same region as the resources; this goes a long way towards saving cost. The query sketch after this table helps identify what is driving ingestion.

What could go wrong: Network Watcher logs. These logs monitor subnet traffic in Azure. Created per region, they can contribute a high cost if not monitored or configured correctly, and the transferred logs usually add cost of their own.
How to approach: When provisioning environments, keep tabs on the Network Watcher configuration, and avoid enabling it in development and other non-prod scenarios.
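
Before moving or trimming logs, it helps to know which data types dominate ingestion. Here is a minimal sketch using the azure-monitor-query package and the built-in Usage table in Log Analytics; the workspace-id environment variable is our placeholder, and the sketch assumes the query completes without a partial result.

```python
# Minimal sketch: report billable Log Analytics ingestion by data type.
import os
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = os.environ["LOG_ANALYTICS_WORKSPACE_ID"]  # hypothetical

# The built-in Usage table reports ingestion volume (Quantity) in MB.
QUERY = """
Usage
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by DataType
| order by IngestedMB desc
"""


def report_ingestion() -> None:
    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(WORKSPACE_ID, QUERY,
                                      timespan=timedelta(days=30))
    for table in response.tables:
        for row in table.rows:
            print(f"{row[0]}: {row[1]:.0f} MB ingested in the last 30 days")


if __name__ == "__main__":
    report_ingestion()
```

The top one or two data types (often container or gateway logs) usually account for most of the bill, which tells you where retention or sampling changes will pay off.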

DR (DISASTER RECOVERY) PLANNING

Every organisation needs solid DR planning, whether the setup is on-premises or in the cloud. When doing DR planning in the cloud, a few points to consider are:

  • In the DR plan, let low-cost or no-cost resources take precedence: provision those first during DR, for example network resources, Application Gateway, VPN connections, and DNS configuration. Little or no cost is incurred, yet a base is ready and available.
  • Where possible, set the RTO (Recovery Time Objective) for compute resources to the minimum allowed, and no lower. This way, those resources, for example application VMs and AKS, can be provisioned only during an outage.
  • For the DB, the RTO can be somewhat higher, but the RPO (Recovery Point Objective) should be close to negligible, depending on the requirements. If a higher RPO is acceptable, try avoiding a redundancy setup; instead, schedule backups of data from the primary region and restore them in the DR region.

Avoiding redundancy removes the continuous data sync between the primary and secondary setups, which reduces cost; the small worked example below illustrates the trade-off.
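
A tiny worked example of that RPO reasoning: with scheduled backups instead of continuous replication, the worst-case data loss is roughly one backup interval. The figures below are illustrative, not from any real requirement.

```python
# Sketch: does a backup schedule satisfy the agreed RPO?

def max_data_loss_hours(backup_interval_hours: float) -> float:
    """Worst case: an outage hits just before the next scheduled backup."""
    return backup_interval_hours


required_rpo_hours = 24    # hypothetical RPO agreed with the business
backup_interval_hours = 6  # hypothetical schedule: a backup every 6 hours

worst_case = max_data_loss_hours(backup_interval_hours)
if worst_case <= required_rpo_hours:
    print(f"Backups every {backup_interval_hours}h meet the "
          f"{required_rpo_hours}h RPO (worst case {worst_case}h of loss); "
          "continuous cross-region replication can be skipped to save cost.")
else:
    print("Backups alone miss the RPO; keep geo-replication.")
```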
