Resource Automation in the Cloud: Datacenter Automation and Integration


This is the last post in our series on resource automation in the cloud. Today, we will look at datacenter automation and integration.

What are the benefits of data center automation? First of all, it frees up IT staff: the more tasks you automate, the fewer resources you need to allocate to them, and your IT staff can focus on more important things. You should automate tasks that are repeatable, such as provisioning machines.
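As a sketch of what automating a repeatable task like provisioning can look like, the snippet below drives machine creation from a declarative spec. Note that `provision_vm` is a hypothetical placeholder for whatever API your cloud or virtualization platform actually exposes; all names here are illustrative, not tied to a real product.

```python
# Minimal sketch of automated machine provisioning.
# provision_vm() is a placeholder for your platform's real API call
# (e.g. a cloud SDK or a virtualization management interface).

def provision_vm(name, cpus, memory_gb):
    # In a real setup this would call out to the platform API.
    print(f"Provisioning {name}: {cpus} vCPUs, {memory_gb} GB RAM")
    return {"name": name, "cpus": cpus, "memory_gb": memory_gb, "state": "running"}

# Declarative spec: the repeatable part that used to be manual work.
spec = [
    {"name": "web-01", "cpus": 2, "memory_gb": 4},
    {"name": "web-02", "cpus": 2, "memory_gb": 4},
    {"name": "db-01",  "cpus": 4, "memory_gb": 16},
]

machines = [provision_vm(**vm) for vm in spec]
```

Once the spec is in code, adding a machine is a one-line change instead of a manual procedure, which is exactly the kind of repeatable work worth automating first.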
With Data Center Integration, you leverage the best capabilities of
  • Existing systems
  • Processes
  • Environments

Key areas for Datacenter Automation are:

  • Reducing labor costs by allowing reduction or reallocation of people
  • Improving service levels through faster measurement and reaction times
  • Improving efficiency by freeing up skilled resources to do smarter work
  • Improving performance and availability by reducing human errors and delays
  • Improving productivity by allowing companies to do more work with the same resources
  • Improving agility by allowing rapid reaction to change, delivering new processes and applications faster
  • Reducing reliance on high-value technical resources and personal knowledge

A key driver for Datacenter Integration and Automation is SOA (Service-Oriented Architecture), which allows much better integration of the different services across the datacenter. Drivers for integration are:

  • Flexibility. Rapid response to change, enabling shorter time to value
  • Improved performance and availability. Faster reactions producing better service levels
  • Compliance. Procedures are documented, controlled and audited
  • Return on investment. Do more with less, reduce cost of operations and management

If you decide to automate tasks in your datacenter, there are some areas where you should start:

  • The most manual process
  • The most time-critical process
  • The most error-prone process
Once these three processes are identified, enterprises should:
  • Break down high-level processes into smaller, granular components
  • Identify where lower-level processes can be "packaged" and reused in multiple high-level components
  • Identify process triggers (e.g. end-user requests, time events) and end-points (e.g. notifications, validation actions)
  • Identify linkages and interfaces between steps of each process
  • Codify manual steps wherever possible
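The decomposition steps above can be sketched in code: each low-level step becomes a small reusable function, a high-level process "packages" a chain of them, and the trigger and end-point are explicit. All step and process names below are hypothetical examples, not a specific product's API.

```python
# Sketch: breaking a high-level process into reusable low-level steps.
# Each function is one granular component; the step names are made up.

def allocate_storage(request):
    request["storage"] = "allocated"
    return request

def configure_network(request):
    request["network"] = "configured"
    return request

def notify_user(request):
    # End-point: a notification action closing the process.
    print(f"Process finished for {request['user']}")
    return request

# "Package" low-level steps into a high-level process that can be reused.
def run_process(steps, request):
    for step in steps:
        request = step(request)
    return request

# The same packaged steps can appear in multiple high-level processes.
PROVISION_SERVER = [allocate_storage, configure_network, notify_user]

# Trigger: e.g. an end-user request arriving.
result = run_process(PROVISION_SERVER, {"user": "alice", "size_gb": 100})
```

Because the steps are granular and share a common interface (a request dict in, a request dict out), a step like `configure_network` can be reused in, say, a decommissioning process without rewriting it.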

 


Published by

Mario Meir-Huber

I work as a Big Data Architect for Microsoft. In this role, I support my customers in applying Big Data technologies, mainly Hadoop/Spark, to their use cases. I also teach this topic at various universities and frequently speak at conferences. In 2010 I wrote a book about Cloud Computing, which is often used at German and Austrian universities. In my home country (Austria), I am part of several Big Data organisations.
