Channel: vCenter Operations – VMware Cloud Management

vRealize Suite Lifecycle Manager 1.3 – What’s new


On July 10, 2018, VMware released an updated version of its vRealize Suite Lifecycle Manager.

vRealize Suite Lifecycle Manager (vRSLCM) automates Day 0 to Day 2 operations of the entire vRealize Suite, enabling a simplified operational experience for customers. vRealize Suite Lifecycle Manager automates installation, configuration, upgrade, patching, configuration management, drift remediation, and health monitoring from within a single pane of glass, freeing IT managers and cloud admins to focus on business-critical initiatives while improving time to value (TTV), reliability, and consistency.

What’s New for 1.3?

 

Support for vRealize Network Insight

  • Support for versions 3.7 and 3.8
  • Specify an installation size that fits (Small, Medium, or Large)
  • Install via the configuration wizard
  • Export an existing environment configuration to JSON to replicate environments
  • A separate license is required for vRealize Network Insight
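The JSON export above lends itself to scripted replication of environments. As a rough illustration (the field names below are hypothetical – the real export schema is defined by vRealize Suite Lifecycle Manager), an exported configuration could be reloaded and retargeted like this:

```python
import json

# Hypothetical shape of an exported vRSLCM environment spec; the real
# export format is product-defined, and this structure is illustrative only.
environment = {
    "environmentName": "prod-vrni",
    "products": [
        {
            "id": "vrni",
            "version": "3.8",
            "nodeSize": "medium",  # Small / Medium / Large sizing from the wizard
        }
    ],
}

exported = json.dumps(environment, indent=2)

# A replica environment can be created by loading the JSON and changing
# only the fields that differ (name, sizing, target infrastructure).
replica = json.loads(exported)
replica["environmentName"] = "dr-vrni"

print(replica["products"][0]["version"])  # -> 3.8
```

The point is simply that a JSON export makes environment definitions diffable and repeatable, rather than click-through artifacts.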

 

 

Content Management Enhancements – Now with vSphere Support!

  • Manage vSphere templates and Customization specifications
  • vSphere Content library integration (vSphere 6.0+ required)
  • RBAC on content tags

 

 

Product Upgrade Pre-Checker

  • Perform pre-validations before product upgrades
  • View recommendations for failures via a downloadable report

 

 

Other Enhancements

  • Support for NTP Global or Individual Server Configuration
  • Grouping of vRealize product components in the environment creation wizard
  • Log Insight content pack for monitoring your vRealize Suite Lifecycle Manager instance
  • Support for new vRealize Suite versions via product support packs – no need to upgrade LCM when new versions of vRealize Suite products are released

What’s new Video

 

 

 

Documentation and Links

The post vRealize Suite Lifecycle Manager 1.3 – What’s new appeared first on VMware Cloud Management.


A comparative analysis on health monitoring of Pivotal Container Service (PKS) using VMware Wavefront and vRealize Operations


Contributions from Alka Gupta and Dinesha Sharma

 

VMware Pivotal Container Service (PKS) comes with out-of-the-box integration with VMware Wavefront for real-time health monitoring and analytics of your cloud native applications and platform. Pivotal Container Service also readily integrates with VMware vRealize Operations Manager. In this blog, we present a comparative study of the metrics and analytics provided by vRealize Operations versus Wavefront so that developers and platform reliability engineers (PREs) can make the most appropriate use of both products in their data centers.

Both vRealize Operations and Wavefront are quite distinct in their value proposition for monitoring Pivotal Container Service. vRealize Operations is primarily designed for infrastructure monitoring. With the Kubernetes management pack, it provides the health status of nodes, namespaces, pods, containers, and services. Wavefront provides complete visibility in real time into each level of Pivotal Container Service clusters, up to the application level, using Heapster and kube-state-metrics.

vRealize Operations 6.7 introduced application monitoring with Wavefront, a new capability that allows customers using vRealize Operations 6.7 to begin exploring Wavefront and offers consistent monitoring tools for DevOps, while maintaining visibility and control for the cloud administrator. With the Wavefront and vRealize Operations integration, you can extend vRealize Operations to monitor application-specific services.

Table 1.0 below gives a summary of how Wavefront and vRealize Operations complement each other:

 

Target Users
  • Wavefront: DevOps, Developers, Platform Reliability Engineers
  • vRealize Operations: Infrastructure Admins, Cloud Admins, Platform Reliability Engineers

Value Proposition
  • Wavefront is specifically built to provide container monitoring at scale, with the following features:
    • Find leading problem indicators of containerized applications – at any scale – using real-time analytics
    • Improve developers’ and DevOps teams’ productivity with self-serve, customizable Kubernetes, container, and application dashboards and metrics
    • Proactively detect containerized cloud service resource bottlenecks with intelligent, query-driven alerts
    • Correlate top-level application metrics with Kubernetes orchestration metrics, down to the container and resource level
  • vRealize Operations is an infrastructure operations and monitoring tool for the software-defined data center. With the VMware vRealize Operations Management Pack for Container Monitoring, it provides the following benefits to virtual administrators:
    • A complete Kubernetes topology of Namespaces, Clusters, Replica Sets, Nodes, Pods, and Containers for monitoring Kubernetes clusters
    • An out-of-the-box dashboard that provides troubleshooting insights by highlighting key performance indicators and alerts for the monitored Kubernetes clusters

Deployment
  • Wavefront: available only as Software as a Service (SaaS)
  • vRealize Operations: deployed as on-premises software and also available as a SaaS offering

How does it work?
  • Wavefront’s out-of-the-box integration with Heapster allows metrics to be viewed for all Kubernetes clusters, including Pivotal Container Service
  • vRealize Operations requires the container monitoring management pack; a Kubernetes adapter must then be configured for each cluster instance to view all the clusters in the vRealize Operations dashboard

Table 1.0

 

A further comparative feature analysis of Wavefront and vRealize Operations for Kubernetes health monitoring:

  • Comprehensive visibility into the Kubernetes platform using Wavefront
    • The Wavefront platform provides Kubernetes and container dashboards, delivering visibility into high-level services and applications as well as granular container metrics. These include:
    • Nodes: CPU and memory usage rate per node, bytes received and transfer rate per node, uptime per node, and networking errors per node.
    • Namespaces: pod and container count by namespace, memory usage by namespace.
    • Cluster: number of nodes, namespaces, pods, and pod containers, plus average node CPU and memory usage, cluster memory usage, and average node storage usage.
    • Pods: memory and CPU usage rate by pod, bytes received and transferred rate by pod, memory page faults by pod, and networking errors by pod.
    • File system usage for each pod_name against its namespace, nodename, and resource_Id can also be viewed.
    • Pod containers: memory and CPU usage by container, memory page fault rate by container, file system usage by container, and uptime of containers.
  • Kubernetes dashboard in vRealize Operations:
    • The vRealize Operations Kubernetes dashboard provides a view of Kubernetes clusters and their objects, total metrics, alerts, and health score.
    • A view of node and pod performance, healthy nodes/pods and their properties, and trend metrics of nodes/pods for CPU and memory usage and disk I/O reads and writes.
    • A topology view of a pod associated with other components in a cluster.
    • A topology view of the health status of all cluster members, with alerts.

Detailed Comparative analysis of metrics with Wavefront and vRealize Operations:

 

1. Kubernetes cluster overview (summary)

The Kubernetes dashboard lists the number of pods, namespaces, nodes, and pod containers for the selected cluster. It also displays an overview of the memory/CPU/storage utilization of nodes and the memory usage of the cluster, as shown in Figure 1. Users can click on individual tiles and modify or add Wavefront Query Language to narrow the search down against a metric.

 

Figure 1

In contrast to Wavefront, cluster details in vRealize Operations are limited to the health status summary of the cluster components such as nodes, namespaces, pods, containers, and services, as shown in Figure 2.

Figure 2

 

2. Topological overview of cluster and its components in vRealize Operations

The topological view of a cluster and its components is available only in the vRealize Operations Kubernetes dashboard. Alerts and health details of the cluster and its components can be viewed by clicking on the respective components, as shown in Figure 3. Any failure at the node level can be detected via this feature.

Figure 3

 

3. Node Metrics

Wavefront displays node metrics for all the nodes of a Kubernetes cluster in a single graph, as shown in Figure 4; nodes are distinguished by unique colors in the graph. The Kubernetes dashboard in Wavefront displays the following node metrics: CPU and memory usage rate per node, bytes received and transfer rate per node, uptime per node, and networking errors per node. This graphical feature is unique to Wavefront and makes it easy to do comparative analysis of all the nodes within a single graph.

Figure 4

Further, the Wavefront Kubernetes dashboard has a unique feature called Go Live, where point-in-time data can be viewed for a given metric, as shown in Figure 5.

Figure 5

A more granular view of the nodes is available by hovering over the metrics, which is also unique to Wavefront, as shown in Figure 6.

Figure 6

In the vRealize Operations Kubernetes dashboard, after selecting a specific node, only key node metrics can be viewed. These include memory/CPU usage, disk I/O reads/writes, and availability for the selected node, as shown in Figure 7 and Figure 8.

Figure 7

Figure 8

In contrast to Wavefront, only a single-node graphical representation can be viewed in vRealize Operations, by double-clicking on the metrics in Figure 8.

Metric details per node are shown in Figure 9.

Figure 9

 

4. Pod Metrics

Wavefront displays Kubernetes pod metrics for all the pods of a cluster in a single graph, as shown in Figure 10; pods are distinguished by unique colors in the graph. The Kubernetes dashboard in Wavefront displays the following pod metrics: memory and CPU usage rate by pod, bytes received and transferred rate by pod, memory page faults by pod, and networking errors by pod, which makes it easy to do comparative analysis of all the pods in a single graph. This graphical feature is unique to Wavefront.

Figure 10

A more granular view of pods is available by hovering over the metrics, which is also a unique feature of Wavefront, as shown in Figure 11.

Figure 11

On the other hand, only key pod metrics (memory/CPU usage, availability, number of containers) for the selected pod can be viewed in vRealize Operations, by selecting the desired pod as shown in Figure 12 and Figure 13.

Figure 12

Figure 13

In contrast to Wavefront, only a single-pod graphical representation can be viewed in vRealize Operations, by double-clicking on the metrics as shown in Figure 14.

Figure 14

 

5. Namespace Metrics

The Kubernetes dashboard in Wavefront displays the number of pods/pod containers in a namespace, along with the CPU and memory usage of pods/pod containers in a namespace. Each namespace is distinguished by a unique color, as shown in Figure 15; this vibrant display of namespaces is available only in Wavefront.

Figure 15

More granular metrics for the namespaces are displayed when the mouse is hovered over the graph, as shown in Figure 16.

Figure 16

 

6. Pod Container Metrics

Pod container metrics can be viewed only in the Wavefront Kubernetes dashboard, as shown in Figure 17.

Figure 17

More granular metrics of the pod containers are displayed when the mouse is hovered over the graph, as shown in Figure 18.

Figure 18

 

The Wavefront features listed so far are available for the open source Kubernetes distribution. For VMware Pivotal Container Service specifically, Wavefront provides a custom dashboard with additional metrics, as detailed below.

 

1. Overview:

The VMware Pivotal Container Service dashboard in Wavefront lists the number of pods (running, pending, failed), namespaces, nodes, pod containers (running, terminated, waiting, and container restarts), deployments, and services. It also displays an overview of the memory/CPU/storage utilization of nodes and the memory usage of the cluster, as shown in Figure 19. Users can click on individual tiles and modify or add Wavefront Query Language to narrow the search down against a metric.

Figure 19

 

2. Deployments:

The VMware PKS dashboard in Wavefront provides the following Kubernetes deployment details:

  • A list of all deployments and their namespaces.
  • The number of available, unavailable, and updated replicas per deployment.
  • The total number of deployments in each namespace; each namespace is distinguished by a unique color, as displayed in Figure 20.

Figure 20

 

3. Services:

The VMware PKS dashboard in Wavefront provides the following details of services, as displayed in Figure 21:

  • Information about each service and its namespace, with the cluster IP.
  • Information about each service and its namespace and type.

Figure 21

 

4. Replica Sets

The VMware PKS dashboard in Wavefront provides the following details of replica sets, as displayed in Figure 22:

  • The number of replicas per replica set.
  • The number of desired pods per replica set.

Figure 22

 

Summary

  1. Wavefront provides detailed and granular visibility for containerized applications running in both Kubernetes and Pivotal Container Service. Wavefront focuses on DevOps- and developer-centric monitoring of cloud native applications. It helps correlate behavior across cloud native environments, analyze performance impact, and prioritize new release features.
  2. vRealize Operations helps plan, manage, and scale the SDDC, while giving cloud admins the flexibility to monitor, troubleshoot, balance workloads, and plan their cost and capacity consumption.
  3. vRealize Operations 6.7 and higher adds more visibility into application monitoring with the Wavefront integration; combined, vRealize Operations and Wavefront give cloud admins and DevOps a perfect blend of cloud native and vSphere infrastructure monitoring.

 

The post A comparative analysis on health monitoring of Pivotal Container Service (PKS) using VMware Wavefront and vRealize Operations appeared first on VMware Cloud Management.

vRealize Suite Lifecycle Manager 2.0 – What’s new


We are very excited about the latest release of vRealize Suite Lifecycle Manager, which went GA on September 20, 2018.

Alongside vRealize Suite Lifecycle Manager, we updated several of our key products in the Cloud Management space.

  • vRealize Automation 7.5 – YAY New UI
  • vRealize Business for Cloud 7.5
  • vRealize Code Stream 2.4.1
  • vRealize Operations 7.0 – Massive Release!
  • vRealize Log Insight 4.7
  • vRealize Network Insight 3.9.0

All combined, these products together make up our vRealize Suite, 2018 edition!

 

What is vRealize Suite Lifecycle Manager?

 

vRealize Suite Lifecycle Manager automates installation, configuration, upgrade, patching, configuration management, drift remediation, and health monitoring from within a single pane of glass, freeing IT managers and cloud admins to focus on business-critical initiatives while improving time to value (TTV), reliability, and consistency. It automates Day 0 to Day 2 operations of the entire vRealize Suite, enabling a simplified operational experience for customers.

 

Your Content, anytime, anywhere.

 

By bringing DevOps principles to the management of vRealize Suite content, Content Management in LCM provides an extensible, customizable framework with a set of release management processes for software-defined content, including the ability to capture, version control, test, release, and roll back. This makes it possible to dispense with the time-consuming and error-prone manual processes currently required to manage software-defined content. Supported content includes entities from vRealize Automation, vRealize Orchestrator, vSphere 6.x, and, new with LCM 2.0, vRealize Operations. All of this content can be source controlled with GitLab.

The best part is that vRealize Suite Lifecycle Manager is FREE for all vRealize Suite customers!

 

What’s New for 2.0?

 

Certificate Management

  • Inventory of all certificates used across vRealize Suite environments
  • Generate and import certificates
  • Generate CSRs
  • Apply certificates at the product level
  • Replace certificates

 

 

Patching for vRealize Suite Components

  • Centralized management of all vRealize Suite patches
  • Seamless download and patch deployment
  • Patch application history at the product and environment level
  • Upload patches, or download them directly from myvmware.com

 

 

 

Content Management Enhancements

  • Support for vRealize Operations Content – Alerts, Dashboards, Reports, Metrics and Symptoms
  • Comprehensive content filtering
  • Release and deploy multiple content items across multiple endpoints
  • Manage tags across multiple content items
  • First-class dependency mapping

 

 

 

 

Other Enhancements

  • Simplified onboarding of brownfield environments – no need to provide infrastructure, networks, certificates, or licenses!
  • Support for deploying the vRealize Automation Windows machine for IaaS components
  • Deploy templates from ISO and leverage customization specs
  • Bulk import of data sources from vRealize Network Insight
  • Audit reports
  • Bulk import of vCenter Servers
  • Notifications

 

Documentation and Links

The post vRealize Suite Lifecycle Manager 2.0 – What’s new appeared first on VMware Cloud Management.

Start Running a Self-Driving Datacenter – vRealize Operations 7.0 Workload Optimization!


vRealize Operations 7.0 is the key to your new self-driving datacenter!  Going self-driving will save you time and money, reduce your number of fire-drill headaches and make you look like a superstar to your bosses.  To turn on self-driving you need to enable workload optimization, which lets you automate the business and operational intent in your datacenters.  Workload optimization can provide benefits like driving better application performance, improving consolidation ratios, adhering to SLAs, ensuring datacenter compliance or even lowering costs.  What’s not to like?

Workload optimization is easily accessible from your Quick Start home page.  You’ll find it in the Optimize Performance pillar.

Configuring workload optimization is as easy as 1, 2, 3 and can be done via the Operational Intent, Business Intent and Optimization Recommendation widgets.

Step 1 – Configuring Your Operational Intent

Select the datacenter or custom datacenter you wish to configure then click on the EDIT button on the Operational Intent widget.

The first thing to configure is your target utilization objective (the workload optimization slider).  This dictates how vRealize Operations will move workloads between the clusters in the datacenter to best optimize it.  The default setting, Moderate, means workloads are only moved when a cluster is facing resource contention.  Setting it to Balance means workloads are evenly spread out across the clusters to drive the best possible performance in the datacenter.  On the other hand, with the Consolidate setting, workloads are placed to maximize utilization and lower datacenter costs.  This is sometimes called “densification”.

The other configuration that needs to be set is cluster headroom, which lets you specify how much risk is acceptable.  Workload Optimization will move workloads between clusters in the datacenter, and the headroom setting says when a cluster should be considered full.  It provides a buffer of space to reduce the risk from bursts or unexpected resource demand in the cluster.
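Conceptually, the headroom setting reserves a slice of cluster capacity as a buffer. Here is a minimal sketch of that idea (my own illustration – the actual vRealize Operations calculation is internal to the product and may differ):

```python
def cluster_is_full(used_ghz: float, capacity_ghz: float, headroom_pct: float) -> bool:
    """Treat a cluster as full once usage crosses capacity minus headroom.

    This mirrors the buffer idea described above; it is not the product's
    real algorithm, just the shape of the trade-off.
    """
    usable = capacity_ghz * (1 - headroom_pct / 100)
    return used_ghz >= usable

# With 20% headroom, a 100 GHz cluster is "full" at 80 GHz of demand.
print(cluster_is_full(75, 100, 20))  # False: still room for bursts
print(cluster_is_full(85, 100, 20))  # True: the buffer has been consumed
```

A larger headroom percentage means moves stop sooner, trading consolidation for safety against demand spikes.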

Once both of these are set, simply click Save to commit the settings.

Step 2 – Configuring Your Business Intent

That was easy!  Now we need to set up your business intent for your datacenter.   Click on the EDIT button for Business Intent.

In this screen you specify your business needs so workload optimization can ensure they are met by placing workloads on the correct hosts and clusters.  It does this by leveraging vSphere tags, placing workloads onto clusters and hosts where the tags match. Depending on your business needs, you may wish to enable cluster-based or host-based placement.  In this blog we will discuss cluster-based placement; I will do a deep dive on host-based placement in an upcoming blog.

Let’s start by clicking on cluster-based placement, which opens the tag selection area.  Here we specify what type of business intent we want to drive.  These are free-text categories used to describe your business needs.  You can use one of the out-of-the-box categories or make up one of your own.

In the drop-down you will see any cluster-based vCenter tag categories that have been configured. Once you select your category, you are shown all the associated vCenter tags.  Simply choose the tags you want to use to drive your business intent.
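The tag matching described above can be pictured as a simple lookup: a workload is eligible for a cluster only when the cluster carries the workload's business-intent tag. A toy sketch, with made-up tag and cluster names (vRealize Operations does the real matching and the moves):

```python
# Illustrative only: clusters and VMs carry vSphere tags from the same
# category, and placement keeps each VM on a cluster with a matching tag.
clusters = {
    "cluster-01": {"License:Oracle"},
    "cluster-02": {"License:Microsoft"},
}

vms = {
    "oracle-db-01": "License:Oracle",
    "iis-web-01": "License:Microsoft",
}

def eligible_clusters(vm_tag: str) -> list:
    # A cluster is a valid target when it carries the VM's intent tag.
    return [name for name, tags in clusters.items() if vm_tag in tags]

for vm, tag in vms.items():
    print(vm, "->", eligible_clusters(tag))
```

A VM whose tag matches no cluster would surface as a violation rather than silently landing somewhere arbitrary.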

Once both are set, simply click Save to commit the settings.

Step 3 – Configuring Automation Level

Now that you have your operational and business intent configured, it’s time to turn workload optimization loose and start making your datacenter run better!  The Optimization Recommendation widget will show you whether your datacenter is…um…optimized. A datacenter is flagged as Not Optimized if its operational intent is not being met.  For instance, if you set the datacenter up for balance, it will be flagged as Not Optimized if the clusters are out of balance.

New to 7.0 is the idea of Tag Violations for business intent.  This means datacenters can be flagged as “Not Optimized” if your business intent is not met.  For instance, if you are trying to drive license enforcement and you have Oracle VMs running on Microsoft clusters a Tag Violation will be shown and the datacenter will be labeled as Not Optimized.  Even better, these tag violations can be resolved through workload optimization.

You have 3 options of how to run a workload optimization and deal with a Not Optimized datacenter: Optimize Now, Schedule or Automate.

If you wish to keep your hands on the wheel you can use the Optimize Now button to manually optimize the datacenter when you wish.  You can also use the Schedule button to run workload optimizations during your maintenance windows.

However, if you don’t feel like logging in and clicking a button, or waiting for your maintenance window to fix these issues, we have an answer for you: automate it!  vRealize Operations 7.0 enables full automation of workload optimization so you can be sure your workloads are meeting both business and operational intents around the clock.  A simple click of the Automate button and vRealize Operations takes over.

If you wish to see this WHOLE THING working together, I have created a video showing how you can automatically set up cluster-based SLA tiers in your datacenter.  It’s a bit long, but really worth the time!

 

 

The post Start Running a Self-Driving Datacenter – vRealize Operations 7.0 Workload Optimization! appeared first on VMware Cloud Management.

Self-Driving all the way to the Host? Oh yeah Host Based Placement!


As you hopefully saw in my first blog, vRealize Operations 7.0 is the key to your new self-driving datacenter!  If you missed it you can read it HERE.  Enjoy!  But that’s not the end of the self-driving story.  You can now drive your business intent (license enforcement, SLAs, compliance, etc.) all the way down to your hosts with Host Based Placement!  Let me explain…

Many customers have LARGE clusters containing hosts serving different business purposes, or datacenters with single massive clusters in them.  Driving business intent at the cluster level just doesn’t make sense in these situations.  Instead, they need the ability to create logical boundaries around hosts within the cluster and honor business intent within those boundaries (these hosts for Oracle, these hosts for MSFT, etc.). Until now customers could only do this with DRS, but that is complex, and maintaining DRS rules for this is too difficult.  Wouldn’t it be GREAT if you could automatically drive your business intent from ONE place?  The answer is vRealize Operations!  With 7.0 you can:

  • Have your business intent jointly honored by vRealize Operations and DRS
  • Greatly simplify the creation of DRS rules for your business needs
  • Automatically fix business intent violations at the cluster AND host level

Let me show you how it works…

  • First go into the Business Intent widget in the Workload Optimization screen.

 

  • Next select Host Based tag placement.  Note: you can use either Host Based or Cluster Based intents, but not both in the same DC/CDC.  This capability also uses vSphere tagging to match VMs with hosts. Simply pick the tags you want to enforce/honor.
  • Host Based Placement works WITH DRS: vROps will create DRS rules to enforce your business intent between hosts.  Basically, the tagging information for VMs and hosts is used to create VM-group and host-group affinity rules.
  • If you have DRS rules for VM-VM and VM-Host affinity/anti-affinity, vROps will check them and let you know if your business intent conflicts with your user-created DRS rules. vROps will also check every 5 minutes to make sure no NEW rules conflict with your business intent.  You have the option to review them and, if you agree, the user-created rules will be disabled.  They can be re-enabled later if you decide not to use Host Based Placement.
  • vROps will now create DRS groups and rules in vCenter.  The objects created by vROps have naming conventions to make them easy to identify. For instance, each rule will follow a vROps_<Tag Category>_<Tag>_AR format, for example vROps_License_Oracle_AR (the “AR” stands for “Automated Rule”).
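That naming convention also makes vROps-managed rules easy to generate or recognize from a script. A small helper, purely illustrative:

```python
def vrops_rule_name(tag_category: str, tag: str) -> str:
    # Builds the vROps_<Tag Category>_<Tag>_AR name described above;
    # "AR" stands for "Automated Rule".
    return f"vROps_{tag_category}_{tag}_AR"

def is_vrops_rule(rule_name: str) -> bool:
    # Lets a reporting script separate vROps-created rules from
    # user-created DRS rules by name alone.
    return rule_name.startswith("vROps_") and rule_name.endswith("_AR")

print(vrops_rule_name("License", "Oracle"))      # -> vROps_License_Oracle_AR
print(is_vrops_rule("vROps_License_Oracle_AR"))  # -> True
print(is_vrops_rule("my_custom_affinity_rule"))  # -> False
```

This is only a naming check; the authoritative source of what vROps manages is, of course, the product itself.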

 

This short video on Host Based Placement really explains it best.

The post Self-Driving all the way to the Host? Oh yeah Host Based Placement! appeared first on VMware Cloud Management.

Physical Infrastructure Planning with vRealize Operations 7.0


An important part of managing your self-driving datacenter is dealing with capacity planning to prevent shortfalls in capacity.  With the release of vRealize Operations 7.0, it is easy to perform What-If analysis for adding new servers.

To begin planning, you can find Physical Infrastructure Planning after you click on Plan under the Optimize Capacity pillar on the Quick Start page.

First, you must give the scenario a name.  Next, select the Datacenter or Custom Datacenter and the cluster that needs additional capacity.  Select the server type and quantity you wish to add in the scenario.

The Server Type field has a few options.  It defaults to showing the same server types that are already in the cluster, which makes it easy to keep a homogeneous cluster.  You can also pick from a list of servers in the built-in cost database.  Lastly, you can enter your own CPU, memory, and cost for a custom server specification if needed.

As you can see below, the results of the analysis are shown in a simple, easy-to-read display.  The scenario results show how these 2 new hosts will increase time remaining from 137 days to 364 days at a cost of $19,268.  Below that, you can see where the new hosts are projected to be added and how they provide enough capacity to last nearly a year into the future.
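The arithmetic behind that scenario summary is straightforward; restating the example numbers (taken from the figures above, not computed by vRealize Operations):

```python
# Back-of-the-envelope version of the What-If result described above.
current_days_remaining = 137
projected_days_remaining = 364
hosts_added = 2
scenario_cost = 19268  # USD, from the built-in cost database in the example

added_runway = projected_days_remaining - current_days_remaining
cost_per_host = scenario_cost / hosts_added

print(added_runway)   # -> 227 extra days of capacity runway
print(cost_per_host)  # -> 9634.0 per host
```

Framing the result as dollars per day of added runway makes it easy to compare alternative server types in the same scenario.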

If you would like a walkthrough of the Physical Infrastructure Planning What-If analysis, watch this video.

The post Physical Infrastructure Planning with vRealize Operations 7.0 appeared first on VMware Cloud Management.

Great Widget and View Enhancements in vRealize Operations 7.0


With the latest release of vRealize Operations 7.0, we introduced new enhancements to many of the out-of-the-box widgets and views. Below is a quick description of several enhancements you can use when creating cool new custom dashboards with the new dashboard creation canvas.

 

Perform Actions from the Alert Widget:

You can also filter the Alert widget to only show alerts that have associated actions:

Improved Line Chart Widget:

 

Improved Trend View:

 

 

Bar and Pie Chart:

 

 

List View Enhancement:

 

Summary:

These are just a few of the enhancements made in vRealize Operations 7.0. For more information about what is new in vRealize Operations 7, visit the “What’s New” blog post.

 

Other Blogs to Checkout:

New Dashboard Creation in vRealize Operations 7

Sharing Dashboards in vRealize Operations 7

What’s New in vRealize Operations 7

 

 

The post Great Widget and View Enhancements in vRealize Operations 7.0 appeared first on VMware Cloud Management.

Put Your Operations in Overdrive – Technical Overview of the vRealize Operations Management Pack for vRealize Orchestrator 2.0


If you are not already excited for vRealize Operations 7.0, maybe this will do the trick.  In this blog post I will provide the technical details for the latest version of the vRealize Operations Management Pack for vRealize Orchestrator, 2.0.  This improved version of the management pack, first introduced with vRealize Operations 6.7, brings the capability to add any vRealize Orchestrator workflow to the action framework in vRealize Operations – including your own custom workflows – for full automation of alerts and recommendations.  Buckle up, we’re going into overdrive!

Before we dive in, let me share the requirements for this management pack:

  • vRealize Operations 7.0
  • vRealize Orchestrator 7.3, 7.4, or 7.5

Easier to Use Action Integration

 

In this release we get much easier workflow execution from the action menu.  For example, selecting the action “Manage Snapshots for VM” from a cluster resource gives us a pre-filled list of all VMs residing in the cluster, so you can multi-select and choose from a list of possible actions.

Likewise, selecting the workflow “Migrate a Virtual Machine with vMotion” provides pull down lists for each of the input options as well as a list of available host systems.

This is only the beginning – what comes next is really a game changer!

Any Workflow.  Any Time.

 

If you use vRealize Operations today, you are probably familiar with the Action menu found throughout the product user interface.  This allows users to easily launch tasks from within vRealize Operations for common tasks such as cleaning up snapshots, changing VM CPU and memory configuration, power operations on VMs and more.  Version 1.0 of the management pack brought new workflows that were powered by vRealize Orchestrator – including some new host operations.

Now in version 2.0 you can add ANY workflow to the action framework.  This creates a new and practically infinite set of options for troubleshooting and remediation.   For example, in the screenshot below, you can see that I have enriched the available actions for hosts, to include options for managing NTP settings.

This workflow is a custom workflow I created, from an example I found on vBrownbag.com, to solve a problem I frequently hear about from customers.  Notice that when I run the workflow, the host input has already been selected for me, making it easy to use these workflows with any vSphere object.

The other input for the workflow is the list of NTP servers.  I can simply provide a comma-separated list of IP addresses or FQDNs enclosed in parentheses, brackets, or braces/curly brackets.  This matches the workflow inputs in vRealize Operations.  Here is the workflow, and you can see the inputs as well as the output.
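Accepting a comma-separated list wrapped in parentheses, brackets, or braces is easy to emulate. A sketch of such an input parser (my own code, not the management pack's):

```python
def parse_server_list(raw: str) -> list:
    """Parse inputs like "(a, b)", "[a,b]" or "{a, b}" into a list of strings.

    Illustrative only: it strips one matching layer of wrapping characters,
    then splits on commas and trims whitespace.
    """
    raw = raw.strip()
    if raw and raw[0] in "([{" and raw[-1] in ")]}":
        raw = raw[1:-1]
    return [item.strip() for item in raw.split(",") if item.strip()]

print(parse_server_list("(10.0.0.1, ntp.example.com)"))
# -> ['10.0.0.1', 'ntp.example.com']
print(parse_server_list("{time1.local,time2.local}"))
# -> ['time1.local', 'time2.local']
```

Whatever the exact wrapper characters, the useful property is that a single free-text field maps cleanly onto a workflow array input.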

The workflow logging is returned to vRealize Operations and logged in recent tasks.

 

Boost Your Recommendations with Actions

Having recommendations for alerts that provide guidance on what to do next is good.  Having recommendations with an associated action that lets you quickly execute the recommendation is better.

Now you can have the best of both by combining your own vRealize Orchestrator workflows with recommendation actions.  This allows you to automate your organization's operations run book, reducing the risk of human error and providing an audit trail of the workflow through vRealize Operations history.

For example, the vSphere Security Configuration Guide alerts in vRealize Operations will let you know if a host has not been configured for NTP.  Here you can see I’ve added a recommendation with an associated action to set NTP configuration on the host to the correct values.

One click, done!  And the alert will automatically clear within the next 5 minutes as vRealize Operations checks the host configuration.

Fully Automated Remediation

 

If you aren't impressed by now and your mind isn't buzzing with ideas on how you can use this management pack in your environment, then check out automated actions.  Why wait for someone to log into the product user interface, find the alert and click the recommended action?  Instead, simply enable the action as "automated" and when the alert triggers, your workflow will be run.

For this to happen, the workflow must have a single input of a vCenter resource kind (such as a VM, Host System or Cluster Compute Resource).  So, you will need to take your existing workflow and add a "wrapper" around it to handle any additional parameters without user interaction.

Let’s look at the set NTP workflow from the last section.  The workflow requires inputs such as the NTP servers.

But, I can use vRealize Orchestrator configuration elements to provide those values and then wrap the set NTP workflow into a new one with a single input.  I will name this “reset NTP” and then associate it with the recommendation.

Now, I just need to go into the policies to make this an automated action and this alert will run the action to reset the NTP configuration to the required values.

Deeper Dive

 

I will follow up with a blog post that goes much deeper and provides step-by-step instructions on how to use the management pack with my workflows for NTP configuration of hosts.  If you are comfortable with vRealize Orchestrator and want to give it a shot, you can find the workflow package on Sample Exchange at VMware {code}.

 

 

 

The post Put Your Operations in Overdrive – Technical Overview of the vRealize Operations Management Pack for vRealize Orchestrator 2.0 appeared first on VMware Cloud Management.


Migration Planning with vRealize Operations 7.0


Previously, I showed how to perform What-If analysis for adding new servers with vRealize Operations 7.0. Today, I’ll show how you can perform What-If analysis for migration to VMware Cloud on AWS or native AWS.

To begin planning, you can find Migration Planning after you click on Plan under the Optimize Capacity pillar on the Quick Start page.

Migrate to VMware Cloud on AWS

In this first scenario, I’ll look at migrating to VMware Cloud on AWS.  The important part here is selecting VMware Cloud on AWS and setting the Application Profile options.  For the Application Profile, you can manually enter the specifications for your workload or import existing VMs to have it automatically populate for you.

When importing existing VMs, you have an option to filter by VM Name, vCenter, VM Tags, or Custom Groups.  In this scenario, I used the filter to select all VMs in the SC2VC02 vCenter.

The scenario results show the number of hosts needed and projected utilization.  In this scenario, that's 4 hosts while utilizing 69% of the CPU.  Costs are broken down into on-demand, 1-year subscription, and 3-year subscription.  The costs on the left show the cost of the VMs per month, which is a subset of the total cost for the 4 hosts.  The monthly cost of the 4 hosts is shown to the right.

A video walkthrough is available for the What-If migration to VMware Cloud on AWS scenario below.

Migrate to Native AWS

In this scenario, I’ll look at migrating to native AWS.  This time, I’ll select Amazon Web Services and the AWS Region.  Since I used “Import from an existing VM” already, I’ll manually enter a workload profile this time.

The analysis results show that I'll need t2.medium instances at a cost of $41 per month each, or $4054 total.

Lastly, if I want to see the cost for a different region, just select a new region and click Run Scenario, and the results will update to reflect the new region.

A video walkthrough is available for the What-If migration to native AWS scenario below.

The post Migration Planning with vRealize Operations 7.0 appeared first on VMware Cloud Management.

Cloud Management Platform (CMP) – Intelligent Provisioning and Optimization


One of the main drivers of a Cloud Management Platform (CMP) is to have the different solutions work together seamlessly.  That means the business, automation, compliance, operations, optimization, capacity, etc…  All these components need to work together to provide you with a unified feel and flow.  Without that, it's really not much of a CMP… it's just a bunch of features and functions cobbled together.

One such integration point that is critical to any CMP, and to your business as a whole, is that of your automation and operations tools.  For VMware that means vRealize Automation (vRA) and vRealize Operations (vROps).  While these two solutions have always delivered better value together, the latest releases of vRA 7.5 and vROps 7.0 have brought major enhancements to the VMware CMP solution.

Initial Workload Placement

The ability to properly provision new workloads into your environment is of utmost importance.  Placing a workload incorrectly could lead to all sorts of issues with compliance, SLAs, application performance, cost, etc.  It also means you will just need to manually move the VM later!  The initial workload placement process between vRA and vROps takes care of all of this for you.

vRA provides the governance (infrastructure privileges – what users have access to what part of your infrastructure) and business intent (SLAs, license compliance, compliance and cost).  vROps on the other hand is responsible for the operational intent (densification, balance, headroom and application performance).  Together they form a powerful solution for deploying your workloads.

  1. It all starts in the vRA customer portal where a user can deploy new workloads and applications from the self-service catalog. vRA’s reservation policies dictate where the workload(s) can be placed in the environment based on the blueprint and the user’s privileges.  Does it need to run on PCI compliant clusters?  Should it run on the fastest storage?  Should it be placed in the test/dev datacenter?
  2. The list of possible places (e.g. datacenter or custom datacenter) to deploy the workloads is sent to vROps which holds your operational intent.
  3. If the deployment is for business-critical apps where performance is paramount, then a balanced approach is recommended. But if the workloads to be deployed are for testing, then maybe a more consolidated approach is warranted.  vROps also ensures that any headroom settings you may have implemented, to lower the possibility of contention from sudden resource spikes, are honored.  Based on your configured operational intent vROps determines the BEST place to deploy these workloads (e.g. cluster).
  4. The placement recommendation is then sent back to vRA.
  5. vRA deploys the workload to the correct cluster. DRS will then place the workloads on the appropriate host(s) within the cluster.

DONE!  Simple and sweet.
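The five-step flow above can be sketched in miniature. This is a hypothetical illustration, not the actual vRA/vROps placement algorithm; the cluster fields, tags, and scoring below are invented for the example:

```python
# Hypothetical sketch of the placement flow described above -- not the
# actual vRA/vROps implementation. Cluster attributes and scoring are invented.

def place_workload(clusters, required_tags, intent="balance"):
    """Filter clusters by business intent (tags), then pick the best
    candidate by operational intent (utilization)."""
    # Steps 1-2: vRA narrows candidates by governance and business intent.
    candidates = [c for c in clusters if required_tags <= c["tags"]]
    if not candidates:
        raise ValueError("no cluster satisfies the business intent")
    # Step 3: vROps scores by operational intent. "balance" prefers the
    # least-utilized cluster; "consolidate" packs the busiest cluster
    # that still has headroom (80% here, standing in for a headroom setting).
    if intent == "balance":
        return min(candidates, key=lambda c: c["cpu_util"])
    headroom_ok = [c for c in candidates if c["cpu_util"] < 0.8]
    return max(headroom_ok or candidates, key=lambda c: c["cpu_util"])

clusters = [
    {"name": "pci-a", "tags": {"pci"}, "cpu_util": 0.55},
    {"name": "pci-b", "tags": {"pci"}, "cpu_util": 0.30},
    {"name": "dev",   "tags": {"dev"}, "cpu_util": 0.10},
]
print(place_workload(clusters, {"pci"})["name"])  # -> pci-b
```

Steps 4 and 5 (returning the recommendation to vRA and deploying to the chosen cluster) are pure orchestration, so they are omitted from the sketch.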

 

Ongoing Workload Optimization

As we’ve discussed, correctly placing workloads in their initial destination is important.  Of course, we hope it stays there throughout its lifetime and never needs to move anywhere, but let’s be honest that’s never the case.  Applications change, workloads are added or deleted, HW is refreshed and all of this can affect the health of your workloads/applications.

Together vRA and vROps provide automated workload optimization functionality that will ensure the business and operational intent of your workloads are met throughout the lifecycle of the applications.

  1. vROps is continuously looking for optimization opportunities based on your operational intent. Is a cluster experiencing contention?  Are there places where we can consolidate?  Are any of the clusters breaching my configured headroom setting?  If the answer to any of these is “yes” vROps will start a workload optimization process to resolve it.
  2. Before recommending any moves vROps needs to understand the business intent of the workloads. To do this it grabs the reservation policies from vRA which tell vROps where workloads CAN be placed.  With the knowledge of the current state of the infrastructure and your business needs, vROps formulates a workload optimization placement plan.
  3. This plan is broken down into the actual moves that need to be made to bring the environment back to an operational and business “green state”.
  4. The list of moves is sent to vRA.
  5. vRA executes the plan thereby ensuring BOTH products in the solution are aware of the changes.

Summary

Together vRA and vROps will help you drive the BEST possible value out of your environment, keep your applications happy and healthy, help lower your overall costs, drive compliance and governance and just make your job easier overall.

Want to see it all in action?  Check out this video.

https://youtu.be/SBY_NO_LvIs

The post Cloud Management Platform (CMP) – Intelligent Provisioning and Optimization appeared first on VMware Cloud Management.

Enhancements to Virtual Machine Memory Metrics in vRealize Operations


Memory metrics for Virtual Machines have changed in recent releases of vRealize Operations to make managing your SDDC better. In this blog, I will explain these changes to help you understand how they help you. To start things off, let’s define a couple of important metrics.

 

Active Memory

By definition, Active Memory is the “amount of memory that is actively used, as estimated by VMkernel based on recently touched memory pages.” Since a virtual machine isn’t constantly touching every memory page, this metric essentially represents how aggressively the virtual machine is using the memory allocated to it. What this means is that memory utilization, as seen from within the guest OS via Task Manager or top, will almost always be greater than Active Memory. Refer to Understanding vSphere Active Memory if you want to read a more detailed explanation of Active Memory. In vRealize Operations, this metric is called Memory|Non Zero Active (KB).

 

Consumed Memory

By definition, Consumed Memory is the “amount of host physical memory consumed by a virtual machine, host, or cluster.” That article also states that “VM consumed memory = memory granted – memory saved due to memory sharing.“ What this means is Consumed Memory can include memory that the guest OS considers free. If you compare Task Manager or top to Consumed Memory, you will see that Consumed Memory is almost always larger. In vRealize Operations, this metric is called Memory|Consumed (KB).

Here is a screenshot comparing Task Manager with Memory|Non Zero Active (KB) and Memory|Consumed (KB).

 

History Lesson

Now that we know what Active and Consumed memory are and how they relate to what the guest OS shows, it’s time for a short history lesson of vRealize Operations (and you thought you were done with history after high school).

 

vRealize Operations 6.6.1 and Older

vRealize Operations 6.6.1 and earlier relied on Active Memory when calculating utilization and demand. This meant that memory utilization always appeared lower than what you see in the guest OS. Active Memory was used by the capacity engine, which meant that sizing recommendations were also based on Active Memory. What this meant was the recommendations were usually quite aggressive.

 

vRealize Operations 6.3

The release of vRealize Operations 6.3 brought support for collecting in-guest metrics via VMware Tools. These metrics weren’t used by any vRealize Operations content (yet), but they were available for you to use. This was an awesome addition because it gave additional visibility into the guest’s perspective without needing another agent. As you can see from the list of metrics below, this meant memory utilization was now available. Note that not all of these metrics are collected by default, but you can enable the ones you need using policies.

Guest metrics added in vRealize Operations 6.3:

  • Guest|Active File Cache Memory (KB)
  • Guest|Context Swap Rate per second
  • Guest|Free Memory (KB)
  • Guest|Huge Page Size (KB)
  • Guest|Needed Memory (KB)
  • Guest|Page In Rate per second
  • Guest|Page Out Rate per second
  • Guest|Page Size (KB)
  • Guest|Physically Usable Memory (KB)
  • Guest|Remaning Swap Space (KB)
  • Guest|Total Huge Pages

 

vRealize Operations 6.7

The release of vRealize Operations 6.7 was a milestone because it really helped improve usability and simplify how you use the product. There are a few critical changes related to memory monitoring, such as a brand-new capacity engine and the elimination of redundant and unnecessary metrics. The most important change related to memory metrics is that it utilizes the Guest|Needed Memory (KB) metric, collected via VMware Tools, which was added in vRealize Operations 6.3. This change was made to greatly improve the quality of projections from the capacity engine as well as rightsizing.

There are some situations where the guest memory metrics can’t make it to vRealize Operations, such as VMware Tools not being installed or running an unsupported version. Knowing that the data may not always be available, Consumed Memory is used as a fallback. Consumed Memory was selected as the fallback metric because, as shown above, it’s more conservative than Active Memory. The primary metrics affected by these changes are Memory|Usage (%) and Memory|Utilization (KB).

Typically, you would see that Guest|Needed Memory (KB) and Memory|Utilization (KB) are nearly identical (unless there is an issue collecting the metric from VMware Tools). If there is an issue collecting Guest|Needed Memory (KB), you will see that it correlates with Memory|Consumed (KB) instead.

Memory|Utilization (KB) is the metric used by the capacity engine and therefore rightsizing recommendations. As you can see, it’s advantageous to ensure that Guest|Needed Memory (KB) is collecting from VMware Tools to get the best quality recommendations.

By now, I’m sure you’re wondering about the actual formula used. If guest metrics from VMware Tools are collecting, Memory|Utilization (KB) = Guest|Needed Memory (KB) + ( Guest|Page In Rate per second * Guest|Page Size (KB) ) + Memory|Total Capacity (KB) – Guest|Physically Usable Memory (KB).  If guest metrics from VMware Tools are not collecting, Memory|Utilization (KB) = Memory|Consumed (KB).
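As a sketch, assuming the formula above (values in KB; the function and parameter names are mine, not vRealize Operations metric keys):

```python
def memory_utilization_kb(needed=None, page_in_rate=0.0, page_size=0.0,
                          total_capacity=0.0, usable=0.0, consumed=0.0):
    """Memory|Utilization (KB) per the formula above. If guest metrics
    from VMware Tools are not collecting (needed is None), fall back to
    Memory|Consumed (KB)."""
    if needed is None:
        return consumed
    # Guest|Needed Memory + (Page In Rate * Page Size)
    #   + Memory|Total Capacity - Guest|Physically Usable Memory
    return needed + (page_in_rate * page_size) + total_capacity - usable

# Example: 4 GB VM whose guest needs 2 GB, with light paging and
# 64 MB of configured memory not usable by the guest.
util = memory_utilization_kb(needed=2 * 1024**2, page_in_rate=10,
                             page_size=4, total_capacity=4 * 1024**2,
                             usable=4 * 1024**2 - 65536)
print(util)  # -> 2162728
```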

 

vRealize Operations 7.0

With the release of vRealize Operations 7.0, there was a tweak made to Memory|Usage (%) based on customer feedback. Memory|Usage (%) was changed to prefer Guest|Needed Memory (KB) from VMware Tools, but it now fails back to Memory|Non Zero Active (KB) if it’s not available. This change allows you to use Memory|Usage (%) to show an aggressive percentage and Memory|Workload (%) to show a conservative percentage in dashboards and reports.

Memory|Utilization (KB) remains unchanged from vRealize Operations 6.7. Memory|Utilization (KB) is still the metric used by the capacity engine and rightsizing recommendations. Again, it’s important to ensure that Guest|Needed Memory (KB) is collecting from VMware Tools to get the best quality recommendations from vRealize Operations.

 

Validation

Now that you know the history, I’m sure you’re wondering how to ensure it’s working optimally. Several components are needed for the feature to work, and it’s important to ensure each of these requirements is met.

  • vCenter Server 6.0 Update 1 or newer
  • ESXi 6.0 Update 1 or newer
  • VMware Tools 10.3.2 Build 10338 or newer for Windows
  • VMware Tools 9.10.5 Build 9541 or newer for 64-bit Linux
  • VMware Tools 10.3.5 Build 10341 or newer for 32-bit Linux
  • Disconnect/reconnect the host from vCenter as mentioned in KB 55675

I realize the list of requirements is long and it can be challenging to track in large environments. To help, I’ve created a dashboard to help you identify VMs that aren’t collecting memory from the guest OS. You can find the dashboard along with install instructions on the vRealize Operations Dashboard Sample Exchange site.

Here’s to better managing your memory and your capacity!

The post Enhancements to Virtual Machine Memory Metrics in vRealize Operations appeared first on VMware Cloud Management.

And the Workload Optimization Train Keeps on Rollin’…


If you have been paying attention to our blogs, you’ve heard us preach about the importance of enabling continuous workload optimization in your environments.  You’ve learned about Business and Operational Intent, a main component of your self-driving datacenter, which allows you to configure vRealize Operations to meet your specific datacenter goals and purpose.  You’ve seen how optimizing workloads can help you save money on licensing costs, drive performance and meet SLAs, reclaim and repurpose unused hardware, and ensure your workloads are running on compliant infrastructure.  All good things!

Over the last few releases of vRealize Operations, we have made huge improvements to our workload optimization feature set by adding host-based placement, scheduling and automation, simplified configuration workflows, tag violations, and much more.  Well, the train keeps rolling in vRealize Operations 7.5 as we add three key features: vSAN Support, Host Group Optimization, and Improved Tag Violation Visualizations.

vSAN Support

Let’s start with the first one, vSAN Support.  vSAN will be an important component as you build out your Software Defined Datacenter (SDDC).  Because it is important to you, it needed to be supported by vRealize Operations, so with 7.5 you can run Workload Optimization on vSAN clusters.  To do it right, we needed to take some of the nuances of vSAN into consideration, so vRealize Operations 7.5 is Resync aware, respects vSAN Slack Space, and honors Storage Policies.  Let’s take a quick look at each of these.

Resync Aware

What is a vSAN resync?  Simply put, it’s the creation, movement, or repair of data to ensure assigned levels of resilience.  Because vSAN must ensure availability of data under a variety of planned and unplanned conditions, it automatically manages the placement of data in accordance with the assigned storage policies of a VM.  This placement of data is an ongoing operation that occurs any time vSAN determines the need to do so, and it creates “resynchronization traffic”: I/O that is not generated by the guest VM but rather by the system, in order to meet the performance and resilience settings prescribed by the storage policy.  It can be triggered by object policy changes, host or disk group evacuations, or object and component repairs.  vRealize Operations will not generate a workload optimization plan while any vSAN cluster is running a resynchronization, so as not to get in the way during this time.  Simple.

Respects Slack Space

Since vSAN is a distributed storage solution, it needs free space to perform certain actions that are transparent to the user.  This free space is known as Slack Space.  Any number of changes (e.g. changing a policy, host/disk group evacuations, on-disk format changes) will temporarily require the use of this slack space to adjust to the new conditions.  vRealize Operations is aware of this need and will not make any workload optimization recommendations that may infringe on the slack space requirements of vSAN.

Honors Storage Policy

Storage Policies are an incredibly powerful tool that allow you to define storage protection and performance requirements using a set of rules that are applied prescriptively to VMs and VMDKs.  They allow you to easily set Failures to Tolerate (FTT) at various levels to meet the demands of the business.  Simply define the outcome in the storage policy, then apply it to the VM.  It’s that easy.  vRealize Workload Optimization will NOT move a VM if doing so would break the Storage Policy of that VM or VMDK.  It leverages vCenter and Storage vMotion to verify the target location will properly support the storage policies before making any moves.

Host Group Optimization

The next improvement comes into play when you are using the host-based placement feature of Workload Optimization.  As you may recall, host-based placement allows you to drive your business intent down to DRS, where vRealize Operations automatically creates VM groups, host groups and affinity rules for you to meet your business objectives (e.g. license enforcement).  Previous versions gave you visibility into the cluster demand for CPU and memory.  With 7.5 you can also see CPU and memory demands on the host groups themselves.

That’s nice, but we didn’t stop there.  We also added host group optimization, which alleviates stress on host groups when they are overloaded and running out of resources.  This means if a host group, like the one above, goes non-green due to a high level of resource requests, vRealize Operations will set the Not Optimized flag and automatically move some of the VMs to another cluster with a similarly tagged host group, ensuring your applications perform well while still meeting your business intent.

Let’s look at a simple example.  We start with an environment on Day 0 where we want to drive host-based license enforcement.  vRealize Operations and DRS have created the host groups and affinity rules to ensure we are meeting our business goals.  As we add workloads to the environment they are running on the appropriate hosts with DRS and vRealize Operations maintaining performance balance.

 

As more and more workloads are added (note the workloads with asterisks) we start to see a performance hit on Host 8 which is a part of the MSFT host group.

DRS is unable to resolve this resource contention as there are no other hosts on this cluster that are valid for these workloads.  vRealize Operations Workload Optimization will automatically run a workload plan and move some of the workload from Host 8 to Host 4 on Cluster 1, thereby relieving the stress on the VMs and ensuring we are still meeting our licensing restrictions.

Improved Tag Violation Visualizations

The last improvement is my favorite one because I have already seen the benefits from it in my work lab about a dozen times or more.  It’s pretty simple, but beneficial.  In previous versions, vRealize Operations could flag a datacenter as Not Optimized if it wasn’t meeting your Business Intent (see below).  It would show you the vCenter tags that were not being honored, but not much more.  You had to start looking around to find which cluster, host and VM were causing the trouble.  Kind of tedious.

Now, in 7.5, it shows you the tag being violated, the cluster it’s associated with, AND the VM(s) causing the violation!  A huge time saver.

Just give in and download a trial of vRealize Operations 7.5 and try it out!  You can also find more demos and videos on vrealize.vmware.com.  Explore and find out for yourself how Workload Optimization has improved in 7.5!

The post And the Workload Optimization Train Keeps on Rollin’… appeared first on VMware Cloud Management.

Allocation Model for Capacity Management in vRealize Operations 7.5


Capacity management is a critical aspect of managing vSphere clusters. Capacity management goals can vary depending on the type of VMs running in the cluster or based on business requirements. Failure to manage capacity based on business goals can lead to capacity shortfalls or performance problems.

In vRealize Operations 6.7 and 7.0, there was only the Demand model, but with the release of vRealize Operations 7.5, the Allocation model is now available. The Allocation model unlocks additional use cases that will help guide you towards more efficient utilization of your clusters and better projections of future utilization.

Demand Model

The Demand model was the only capacity model available in vRealize Operations 6.7 and 7.0. It’s called the demand model because it looks at the demand for resources in the cluster to determine the amount of capacity needed. Demand is utilization plus unmet demand due to contention, like CPU Ready and Co-Stop. The goal of the demand model is to drive towards the most efficient utilization of a cluster based on the actual utilization of its resources.

Allocation Model

I speak to a lot of customers about capacity management, and allocation model comes up in several use cases. The first one is avoiding overcommitment for business critical workloads. In these environments, the cost of the unused resources is not worth the risk of overcommitting resources.

The next use case is showback and reporting. There are typically restrictions such as contractual obligations or SLAs that mandate capacity not be overcommitted beyond an agreed upon ratio. Note these restrictions are usually non-technical.

Some customers like to do procurement planning based on overcommit ratios. A comfortable overcommit ratio is determined, and that’s what is used to project utilization into the future. The overcommit ratio is intended to be a rough estimate of utilization, e.g. 4:1 CPU overcommit ratio means that on average each vCPU will only run 25% utilization.

Many customers I have talked with have historically used Excel for capacity planning because it’s easy to do the mathematical formulas and charts. Getting the data needed into Excel is time consuming so overcommit ratios are used instead of the actual demand for resources. Since overcommit ratios don’t consider actual utilization, there is a real risk of utilization spikes causing performance problems when the wrong overcommit ratio is used.
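The overcommit arithmetic behind these ratios is simple; here is a minimal sketch (hypothetical helpers, not a vRealize Operations API):

```python
def allocatable_vcpus(physical_cores, overcommit_ratio):
    """vCPUs that may be allocated under the allocation model,
    e.g. a 4:1 ratio on 32 cores allows 128 vCPUs."""
    return physical_cores * overcommit_ratio

def vcpus_remaining(physical_cores, overcommit_ratio, allocated_vcpus):
    """How many more vCPUs can be allocated before hitting the ratio."""
    return allocatable_vcpus(physical_cores, overcommit_ratio) - allocated_vcpus

# Two 32-core hosts with a 4:1 ratio allow 256 vCPUs; with 180 vCPUs
# already allocated, 76 remain.
print(vcpus_remaining(64, 4, 180))  # -> 76
```

This is exactly the Excel-style math described above, which is why it ignores actual utilization; the demand model covers that blind spot.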

Configuration

By default, vRealize Operations 7.5 only uses the demand model. There is no configuration necessary to enable it, and it cannot be disabled. It can’t be disabled because the demand model is based on the actual demand for resources in a cluster: if a cluster is running out of capacity due to high demand, failure to address the shortfall will most likely lead to performance problems.

Allocation model, on the other hand, is not enabled by default. Enabling allocation model can be done from the Assess Capacity page by clicking on the icon shown in the screenshot below.

The overcommit ratios for the cluster can be entered in the settings box. Note the Affected Policy at the top to see which policy will be changed. It’s possible to configure a different overcommit ratio for each cluster. For example, development vs. production vs. business critical environments may need different ratios. If different overcommit ratios are needed, a policy will be needed for each ratio and applied to the cluster before making the change to the settings. The example screenshot below shows the policy named Allocation Model has been configured for allocation model. More info on creating policies is available in the product documentation.

Assessing Capacity

The beauty of how the allocation model has been implemented in vRealize Operations 7.5 is that it uses the same capacity analytics engine that was introduced in vRealize Operations 6.7. That means projections for Time Remaining and Capacity Remaining now work equally for both the demand and allocation models.

Time Remaining is based on the most constrained resource, for example, 23 Days until the Production_North cluster runs out of memory based on allocation model.

An excellent example of why demand model is always enabled is shown with CPU in the screenshot below. The cluster is projected to run out of CPU in 76 days based on demand in contrast to 184 days due to allocation. In this situation, if only allocation model was enabled, performance problems could happen around 76 days in the future even though the overcommit ratio has not been reached. If this is deemed acceptable from a demand perspective, lowering the overcommit ratio is suggested.
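The “most constrained resource” rule for Time Remaining reduces to a simple minimum; a sketch using the day counts from the examples above (the helper is hypothetical):

```python
def time_remaining(projections):
    """projections: {resource: days until exhaustion}. Time Remaining is
    driven by the most constrained resource, i.e. the smallest day count."""
    resource, days = min(projections.items(), key=lambda kv: kv[1])
    return resource, days

# Demand and allocation projections are tracked independently, so both
# appear as candidates; memory under allocation is the constraint here.
proj = {"cpu (demand)": 76, "cpu (allocation)": 184, "memory (allocation)": 23}
print(time_remaining(proj))  # -> ('memory (allocation)', 23)
```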

Capacity Allocation Overview Dashboard

If you were a user of vRealize Operations 6.7 or 7.0, you may have used the Capacity Allocation Overview dashboard to track overcommit ratios. This dashboard has been updated to use the configured overcommit ratios, which helps a lot if there are multiple overcommit ratios throughout an environment, such as for development vs. production.

Conclusion

While I’ve covered how to enable allocation model, shown how it affects projections, and use cases for allocation model in this post, those are not the only areas affected by allocation model in vRealize Operations 7.5. A video walkthrough is available HERE. Keep an eye here for future posts covering allocation model’s impact on reclamation, Virtual Machines remaining, and costing. All exciting stuff!

I hope you have a better understanding how allocation model can be used in vRealize Operations, and why it might be needed for some situations. If you don’t have vRealize Operations today, you can download a trial of vRealize Operations 7.5 and try it out in your environment! You can also find more demos and videos on vrealize.vmware.com.

The post Allocation Model for Capacity Management in vRealize Operations 7.5 appeared first on VMware Cloud Management.

vRealize Operations REST API: Create Application and Application Tier Objects


I’ve been working with my colleague John Dias on a demo that involves deploying VMs via Cloud Automation Services and automatically configuring monitoring for them with vRealize Operations. We needed a way to create Application and Application Tier objects during the provisioning process. Looking through the REST API documentation, there are no API calls specifically for creating Applications. I knew that behind the scenes, an Application and its Application Tiers are just objects with specific object types and relationships. Since there are APIs to create objects and set relationships, I thought I could string together those API calls to create an Application. It turned out to be simple to accomplish with only public APIs. Since I believe this could be useful for others in similar situations, I’d like to share the process.

Before I get started, I’d like to mention that all of the requests shown here are included in John Dias’ Postman collection. If you need help setting up Postman to use the collection, please check out this video. I will only cover the new request added to the collection.

 

Get Container Adapter Instance ID

The first step is to find the ID for the container adapter instance, which will be used by a couple of the subsequent calls.

Request

GET https://{{vrops}}/suite-api/api/adapters?adapterKindKey=Container

Response

{
  "adapterInstancesInfoDto": [
    {
      "resourceKey": {
        "name": "Container",
        "adapterKindKey": "Container",
        "resourceKindKey": "ContainerAdapterInstance",
        "resourceIdentifiers": [
          {
            "identifierType": {
              "name": "ClusterId",
              "dataType": "STRING",
              "isPartOfUniqueness": true
            },
            "value": "node1"
          }
        ]
      },
      "collectorId": 1,
      "collectorGroupId": "1f02507e-8a68-46bd-b765-af5229e65adb",
      "monitoringInterval": 1,
      "numberOfMetricsCollected": 0,
      "numberOfResourcesCollected": 0,
      "lastHeartbeat": 1560807657777,
      "lastCollected": 1560807637811,
      "messageFromAdapterInstance": "",
      "links": [
        {
          "href": "/suite-api/api/adapters/b76db3e0-5b7c-4755-ad35-d9c40c3f1628",
          "rel": "SELF",
          "name": "linkToSelf"
        },
        {
          "href": "/suite-api/api/credentials/",
          "rel": "RELATED",
          "name": "linkToCredential"
        }
      ],
      "id": "b76db3e0-5b7c-4755-ad35-d9c40c3f1628"
    }
  ]
}

Parsed Response Body

While the response is an array of objects, there should be only one object returned. We need to store the id, which I’ll call containerAdapterInstanceId. In this example, containerAdapterInstanceId = b76db3e0-5b7c-4755-ad35-d9c40c3f1628.
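If you’re scripting this flow outside of Postman, the parsing step can be sketched in Python. The helper name below is mine, not part of the vRealize Operations API:

```python
# Hypothetical helper: extract the Container adapter instance ID from the
# GET /suite-api/api/adapters?adapterKindKey=Container response body.
def get_container_adapter_instance_id(response_body: dict) -> str:
    instances = response_body.get("adapterInstancesInfoDto", [])
    if len(instances) != 1:
        raise ValueError(
            f"expected exactly one Container adapter instance, found {len(instances)}"
        )
    return instances[0]["id"]
```

The same JSON path (`adapterInstancesInfoDto[0].id`) is what a Postman Test script would read when storing containerAdapterInstanceId.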

 

Create Application Object

The next step is to create the Application object. In this example, I use a pre-request script in Postman to store the application name, “Postman”, in the variable named applicationName on the Pre-request Script tab.

Request

POST https://{{vrops}}/suite-api/api/resources/adapters/{{containerAdapterInstanceId}}

Request Body

In the request body below, “{{applicationName}}” represents the name of the application being created, which is automatically replaced by the value entered in the Pre-request Script tab.

{
 "creationTime": null,
 "resourceKey": {
 "name": "{{applicationName}}",
 "adapterKindKey": "Container",
 "resourceKindKey": "BusinessService",
 "resourceIdentifiers": []
 },
 "resourceStatusStates": [],
 "resourceHealth": null,
 "resourceHealthValue": null,
 "dtEnabled": true,
 "monitoringInterval": 5,
 "badges": [],
 "relatedResources": [],
 "identifier": null
}

Response

{
 "resourceKey": {
 "name": "Postman",
 "adapterKindKey": "Container",
 "resourceKindKey": "BusinessService",
 "resourceIdentifiers": []
 },
 "resourceStatusStates": [],
 "dtEnabled": true,
 "monitoringInterval": 5,
 "badges": [],
 "relatedResources": [],
 "links": [
 {
 "href": "/suite-api/api/resources/7b891296-9ade-47fa-9d23-d56071fb8b3d",
 "rel": "SELF",
 "name": "linkToSelf"
 },
 {
 "href": "/suite-api/api/resources/7b891296-9ade-47fa-9d23-d56071fb8b3d/relationships",
 "rel": "RELATED",
 "name": "relationsOfResource"
 },
 {
 "href": "/suite-api/api/resources/7b891296-9ade-47fa-9d23-d56071fb8b3d/properties",
 "rel": "RELATED",
 "name": "propertiesOfResource"
 },
 {
 "href": "/suite-api/api/alerts?resourceId=7b891296-9ade-47fa-9d23-d56071fb8b3d",
 "rel": "RELATED",
 "name": "alertsOfResource"
 },
 {
 "href": "/suite-api/api/symptoms?resourceId=7b891296-9ade-47fa-9d23-d56071fb8b3d",
 "rel": "RELATED",
 "name": "symptomsOfResource"
 },
 {
 "href": "/suite-api/api/resources/7b891296-9ade-47fa-9d23-d56071fb8b3d/statkeys",
 "rel": "RELATED",
 "name": "statKeysOfResource"
 },
 {
 "href": "/suite-api/api/resources/7b891296-9ade-47fa-9d23-d56071fb8b3d/stats/latest",
 "rel": "RELATED",
 "name": "latestStatsOfResource"
 },
 {
 "href": "/suite-api/api/resources/7b891296-9ade-47fa-9d23-d56071fb8b3d/properties",
 "rel": "RELATED",
 "name": "latestPropertiesOfResource"
 },
 {
 "href": "/suite-api/api/credentials/",
 "rel": "RELATED",
 "name": "credentialsOfResource"
 }
 ],
 "identifier": "7b891296-9ade-47fa-9d23-d56071fb8b3d"
}

 

Parsed Response Body

We should have a new Application object now. From the response, we need to store the value of “identifier” in the variable named applicationObjectId using another Postman Test. In this example, applicationObjectId = 7b891296-9ade-47fa-9d23-d56071fb8b3d.
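Outside of Postman, the same request body can be generated programmatically. This is a minimal sketch with a hypothetical helper name; the payload itself mirrors the request body shown above:

```python
# Hypothetical helper that builds the request body for
# POST /suite-api/api/resources/adapters/{containerAdapterInstanceId}.
def build_application_payload(application_name: str) -> dict:
    return {
        "creationTime": None,
        "resourceKey": {
            "name": application_name,
            "adapterKindKey": "Container",
            # Behind the scenes, an Application is a "BusinessService" object.
            "resourceKindKey": "BusinessService",
            "resourceIdentifiers": [],
        },
        "resourceStatusStates": [],
        "resourceHealth": None,
        "resourceHealthValue": None,
        "dtEnabled": True,
        "monitoringInterval": 5,
        "badges": [],
        "relatedResources": [],
        "identifier": None,
    }
```

The "identifier" field in the POST response is the applicationObjectId you would capture for the subsequent calls.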

 

Start Collecting Application Object

The Application object is created in a not collecting state, so we need to start collection.

Request

PUT https://{{vrops}}/suite-api/api/resources/{{applicationObjectId}}/monitoringstate/start

There is no payload returned with this request, but you should notice that the Application object is now collecting.

 

Create Application Tier Object

Now that we have an application, we need to create an Application Tier. This request is very similar to the one we used to create the Application object. I’ve again used a pre-request script in Postman to store the application tier name, “App”, in the variable named applicationTierName on the Pre-request Script tab.

Request

POST https://{{vrops}}/suite-api/api/resources/adapters/{{containerAdapterInstanceId}}

Request Body

In the request body below, “{{applicationTierName}}” represents the name of the application tier being created, which is automatically replaced by the value entered in the Pre-request Script tab.

 

{
 "creationTime": null,
 "resourceKey": {
 "name": "{{applicationTierName}}",
 "adapterKindKey": "Container",
 "resourceKindKey": "Tier",
 "resourceIdentifiers": [
 {
 "identifierType": {
 "name": "BS_Tier Name",
 "dataType": "STRING",
 "isPartOfUniqueness": true
 },
 "value": "{{applicationObjectId}}_{{applicationTierName}}"
 }
 ]
 },
 "resourceStatusStates": [],
 "resourceHealth": null,
 "resourceHealthValue": null,
 "dtEnabled": true,
 "monitoringInterval": 5,
 "badges": [],
 "relatedResources": [],
 "identifier": null
}

Response

 

{
 "resourceKey": {
 "name": "App",
 "adapterKindKey": "Container",
 "resourceKindKey": "Tier",
 "resourceIdentifiers": [
 {
 "identifierType": {
 "name": "BS_Tier Name",
 "dataType": "STRING",
 "isPartOfUniqueness": true
 },
 "value": "7b891296-9ade-47fa-9d23-d56071fb8b3d_App"
 }
 ]
 },
 "resourceStatusStates": [],
 "dtEnabled": true,
 "monitoringInterval": 5,
 "badges": [],
 "relatedResources": [],
 "links": [
 {
 "href": "/suite-api/api/resources/3f7f0628-6173-4a95-9103-bfc262cd9c51",
 "rel": "SELF",
 "name": "linkToSelf"
 },
 {
 "href": "/suite-api/api/resources/3f7f0628-6173-4a95-9103-bfc262cd9c51/relationships",
 "rel": "RELATED",
 "name": "relationsOfResource"
 },
 {
 "href": "/suite-api/api/resources/3f7f0628-6173-4a95-9103-bfc262cd9c51/properties",
 "rel": "RELATED",
 "name": "propertiesOfResource"
 },
 {
 "href": "/suite-api/api/alerts?resourceId=3f7f0628-6173-4a95-9103-bfc262cd9c51",
 "rel": "RELATED",
 "name": "alertsOfResource"
 },
 {
 "href": "/suite-api/api/symptoms?resourceId=3f7f0628-6173-4a95-9103-bfc262cd9c51",
 "rel": "RELATED",
 "name": "symptomsOfResource"
 },
 {
 "href": "/suite-api/api/resources/3f7f0628-6173-4a95-9103-bfc262cd9c51/statkeys",
 "rel": "RELATED",
 "name": "statKeysOfResource"
 },
 {
 "href": "/suite-api/api/resources/3f7f0628-6173-4a95-9103-bfc262cd9c51/stats/latest",
 "rel": "RELATED",
 "name": "latestStatsOfResource"
 },
 {
 "href": "/suite-api/api/resources/3f7f0628-6173-4a95-9103-bfc262cd9c51/properties",
 "rel": "RELATED",
 "name": "latestPropertiesOfResource"
 },
 {
 "href": "/suite-api/api/credentials/",
 "rel": "RELATED",
 "name": "credentialsOfResource"
 }
 ],
 "identifier": "3f7f0628-6173-4a95-9103-bfc262cd9c51"
}

Parsed Response Body

We should have a new Application Tier object now. From the response, we need to store the value of “identifier” in the variable named applicationTierObjectId using another Postman Test. In this example, applicationTierObjectId = 3f7f0628-6173-4a95-9103-bfc262cd9c51.
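As with the Application object, the Tier request body can be sketched in Python (the helper name is hypothetical). The only structural difference is the “BS_Tier Name” identifier, whose value follows the “&lt;applicationObjectId&gt;_&lt;tierName&gt;” convention used in this post:

```python
# Hypothetical helper mirroring the Tier request body shown above.
def build_tier_payload(tier_name: str, application_object_id: str) -> dict:
    return {
        "creationTime": None,
        "resourceKey": {
            "name": tier_name,
            "adapterKindKey": "Container",
            "resourceKindKey": "Tier",
            "resourceIdentifiers": [
                {
                    "identifierType": {
                        "name": "BS_Tier Name",
                        "dataType": "STRING",
                        "isPartOfUniqueness": True,
                    },
                    # Prefixing with the Application object ID keeps the
                    # identifier unique across applications.
                    "value": f"{application_object_id}_{tier_name}",
                }
            ],
        },
        "resourceStatusStates": [],
        "resourceHealth": None,
        "resourceHealthValue": None,
        "dtEnabled": True,
        "monitoringInterval": 5,
        "badges": [],
        "relatedResources": [],
        "identifier": None,
    }
```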

 

Start Collecting Application Tier Object

Just as with the Application object, we need to start collection for the newly created Application Tier object.

Request

PUT https://{{vrops}}/suite-api/api/resources/{{applicationTierObjectId}}/monitoringstate/start

There is no payload returned with this request, but you should notice that the Application Tier object is now collecting.

Make Application Tier Object a Child of Application Object

Now the Application Tier object needs to become a child of the Application object.

Request

POST https://{{vrops}}/suite-api/api/resources/{{applicationTierObjectId}}/relationships/parents

Request Body

 

{
 "uuids": [
 "{{applicationObjectId}}"
 ]
}

Response

There is no payload returned for this call.

 

Add Objects as Child of Application Tier Object

The last step for Application Tiers is to add objects as children of the newly created Application Tier object.

Request

POST https://{{vrops}}/suite-api/api/resources/{{applicationTierObjectId}}/relationships/children

Request Body

The request body contains an array of IDs for objects that will be added to the Application Tier.

 

{
 "uuids": [
 "832e49a8-e85d-4777-9a83-1c15b4660ec1",
 "f2c1d832-91e6-44fc-9499-3aff354c199f"
 ]
}

Response

There is no payload returned for this call.
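Because the parents and children calls differ only in the last path segment and share the same `{"uuids": [...]}` body, a single helper (a sketch with a hypothetical name) can compose both:

```python
# Hypothetical helper composing the relationship calls used above.
def relationship_request(resource_id: str, relation: str, related_ids: list) -> tuple:
    # relation selects the endpoint:
    #   POST /suite-api/api/resources/{id}/relationships/parents
    #   POST /suite-api/api/resources/{id}/relationships/children
    if relation not in ("parents", "children"):
        raise ValueError("relation must be 'parents' or 'children'")
    path = f"/suite-api/api/resources/{resource_id}/relationships/{relation}"
    return path, {"uuids": list(related_ids)}
```

You would POST the returned body to the returned path to wire the Tier under the Application, then again to place VMs under the Tier.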

 

Add Additional Application Tiers

If you have additional application tiers to add, go back to Create Application Tier Object to create another tier.

 

Conclusion

We’ve finally made it to the end, having created a multi-tier application within vRealize Operations using only the public REST APIs. This process can be run during deployment of VMs with Cloud Automation Services or vRealize Automation to give you the context needed to monitor applications in your environment. If you don’t have vRealize Operations today, you can download a trial of vRealize Operations 7.5 and try it out in your environment! You can also find more demos and videos on vrealize.vmware.com.

The post vRealize Operations REST API: Create Application and Application Tier Objects appeared first on VMware Cloud Management.

Integrated Compliance Helps Manage Risk


Compliance and risk management are often looked at as the most onerous tasks in the oversight of any large system. Having spent the better part of 25 years in a mix of operations and management roles myself, I’m not going to lie when I tell you I tend to share this view. However, with an ever-increasing share of business assets and intellectual property tied to the fate of the data center, both of these domains are unavoidable and only becoming more of a challenge to keep up with.

It hasn’t helped that so many of the processes we use for tracking information and maintaining adherence to our corporate policies have been haphazard at best, and an abject failure at worst. I’ve seen spreadsheets used to track critical addresses and passwords, policy documents which haven’t been reviewed or updated in years, and thousands of systems deployed with no way to audit adherence to policies which may or may not have been followed in the first place. As companies have become more invested in the health of the network, and more savvy to a host of risks from insider threats to nation-state hackers, compliance and risk management is as much a part of what we do today as deploying and monitoring the systems themselves has always been.

Our tools have been lacking, however, and have mostly relied on a manual audit of critical servers, virtual machines (VMs), applications, and network devices. This is a process that used to take days or longer on small networks, and has become all but impossible in modern data centers spanning tens of thousands of machines or more. But what if we could automate our compliance with risk management policies? What if we could tell the system what we need, what rules it must adhere to? What if the system could police itself, and report to us anything anomalous?

Ensure Your Compliance Posture Is Steps Ahead

VMware vRealize Operations provides capabilities to continuously measure vSphere compliance based on regulatory or internal IT standards, as well as the ability to automate drift remediation. In other words, it polices itself and, if it notices something out of standard, fixes the problem automatically. And by fixing the problem before it rises to the level of an audit finding, your compliance posture is already several steps ahead of where it would’ve been, and you haven’t had to do anything. This is how our data centers should be running. Manual intervention should be the anomalous event, not the standard.

Examples of features in vRealize Operations that support the overall goals of compliance include vSphere configuration and compliance, vSphere regulatory compliance, and automated configuration management. Working in concert, these features help data center operators provide guidance to the system that matches the organization’s business requirements, and let the data center largely drive itself. Dashboards that correlate metrics, performance, and adherence to policy provide a holistic view into the health of the system, making it easy to audit when necessary.

Adhere to Regulatory Domains and Standards with OOTB Compliance Dashboards

vSphere configuration and compliance enables security configuration for vSphere, with out-of-the-box (OOTB) cluster, host, and VM compliance dashboards. No longer will you have to wonder if your vSphere hosts are maintaining the intended security posture or drifting in a way that opens up the system to risk. With the number of hosts needed to support today’s containerized and virtualized workloads, a lack of visibility into these systems can quickly spiral into something more serious. After-the-fact remediation can be difficult at best, and at worst be covered up by bad actors who have spent time in your hosts prior to discovery.

vSphere regulatory compliance can measure compliance posture against a number of regulatory domains and standards, including DISA, FISMA, ISO, CIS, PCI, and HIPAA. You also have the flexibility to create custom compliance standards, freeing you from waiting for someone else to provide what you need.

Minimize the Risk for Errors with Automation

Automated configuration management allows you to automate drift remediation with OOTB workflows and vRealize Orchestrator integration. You know that certain hosts, VMs, and workloads should be configured in a certain manner, and adhere to a certain set of standards. Why take a chance on someone deploying new systems that don’t match your operational intent? Fix deployments before and during deployment so that you never have to worry about having anything out of compliance. And if something happens, the system can notify you or simply fix the problem proactively.

While compliance and risk management will likely never be the thing that most IT practitioners look forward to, they are necessary. By automating the bulk of the work, you take the tedium out of the process and minimize your risk for errors. Use the artificial intelligence designed into the system to free yourself from the mundane, so you can focus your skills and talent on the bigger challenges, and drive more business IQ into the data center with vRealize Operations. More information on exactly how to configure the new vRealize Operations 7.5 Custom Compliance Templates can be found in – What’s New in vRealize Operations 7.5? A Technical Overview, Part 4.

Want to try vRealize Operations for yourself? Download a free evaluation or checkout the vRealize Operations Hands-On Lab!

The post Integrated Compliance Helps Manage Risk appeared first on VMware Cloud Management.


Tech Preview Announcement: Project Magna for vSAN Continuous Optimization


Here at VMworld 2019 (San Francisco, CA), we are very excited to share the first previews into Project Magna’s adaptive optimization and self-tuning engine.

 

This was first mentioned last year by VMware CEO Pat Gelsinger, who envisioned the true AI/ML ‘Self-Driving Data Center’.  Modern-day applications are dynamic in nature and require that the underlying infrastructure be continuously, automatically, and adaptively optimized to deliver the expected performance and SLA guarantees of today’s businesses.

 

Project Magna is the first instantiation of the ‘Self-Driving Data Center’ vision, beginning with VMware vSAN.  It is a SaaS-based solution that uses reinforcement learning: it collects data, learns from it, and makes decisions that automatically self-tune your infrastructure to drive greater performance and efficiency.

 

The Magna engine runs by itself, manages itself, and applies performance-tuning configurations by itself, and guard rails within the ML algorithms ensure it never degrades performance.  It either optimizes your desired read and write KPIs or stays put.

 

Types of Artificial Intelligence

I do want to touch on the different types of Artificial Intelligence and share how Project Magna uses Reinforcement Learning to analyze your performance KPIs and compare them against other vSAN clusters running similar application workloads.

 

You’ll typically see technologies that use Machine Learning or Deep Learning, whose algorithms depend on existing data points to correlate patterns, forecast outcomes, or predict what next steps to take.  Reinforcement Learning, by contrast, combs through your data and runs thousands of scenarios on the Magna SaaS analytics engine, searching by trial and error for the best reward output.  This is done automatically and continuously across your vSAN clusters to ensure they are always using the best settings to maximize throughput and minimize latency in your modern hyperconverged infrastructure.

 

Automatic Optimization

If you’re already familiar with vRealize Operations, you’ll know that one of the key tenets of the self-driving data center is ‘Continuous Performance Optimization’, and this is where you’ll navigate to configure Project Magna’s tunables.  You’ll be able to select all, or specific, vSAN clusters that you want to apply the self-tuning to.

 

Choose the optimization goal for your infrastructure and enable Magna for:

  • Read Performance
    • Reduce read latency and increase read throughput
    • Headroom will be at least 10% of host memory
  • Write Performance
    • Reduce write latency and increase write throughput
    • Read performance is maintained at a minimum
  • Balanced
    • Optimization to reduce both read and write latency based on workload requirements

 

Visualize your latest performance index readings or search for a specific time frame and compare your current KPIs, the industry averages, and the industry averages with Magna enabled.  Once you’ve enabled the Magna optimization to your vSAN clusters, it continuously adapts to the changing needs of your applications.  With a mouseover of the buttons on the graph, you’ll learn exactly what actions were taken and when.

 

In the screenshot below, we see that Magna increased the read cache size of vSAN cluster vc2c1 by 50 GB on July 23rd for the desired KPI: latency reward.

 

 

Learn More…

I invite you to learn more about Project Magna if you’re here at VMworld.  Here are the breakout sessions that outline the strategy, performance results, and use cases for Project Magna – I highly recommend checking them out!

 

  • HCI1620BU – Artificial Intelligence and Machine Learning for Hyperconverged Infrastructure
    • Monday, Aug 26th – 4-5pm
  • MLA2021BU – Realize your Self-Aware Hybrid Cloud with Machine Learning
    • Monday, Aug 26th – 5-6pm
  • HCI1650BU – Optimize vSAN performance using vRealize Operations and Reinforcement Learning
    • Wednesday, Aug 28th – 11-12pm

 

For any other questions or more information, please don’t hesitate to email us at: magna@vmware.com

 

The post Tech Preview Announcement: Project Magna for vSAN Continuous Optimization appeared first on VMware Cloud Management.

VMware Customer Analysis on AI/ML Adoption in the Modern Data Center


Authors:

Cameron Haight, VP and CTO, Americas

Sidd Mallannagari, Director Strategic Initiatives Cloud Management BU

 

IT environments continue to become more complex, resulting in Infrastructure and Operations teams seeking solutions to help simplify their day-to-day management efforts.  While not always guaranteed to be a panacea, the hope is that solutions incorporating AI and machine learning algorithms will give these teams new capabilities to address the challenges.

As a company that has provided data center technology to improve IT operations for over twenty years, VMware understands that IT organizations need to balance costs with the flexibility the business needs for its digital transformation efforts.  We also realize that the cognitive demands on IT operations teams continue to grow as a result of the unavoidable complexity that accompanies many transformation efforts.

Consequently, VMware is committed to delivering on a vision of an increasingly self-managing infrastructure – a self-driving datacenter.  Keenly aware of past efforts with similar goals, VMware intends to deliver this highly automated functionality using a phased approach.  Ultimately, however, we seek to deliver IT management capabilities that will make the datacenter self-deploying, self-securing, self-optimizing, self-healing, and self-escalating.

To assist us with our vision and guide our efforts, we conducted an online survey of 122 Enterprise customers (those having 5000 employees or more) who comprise part of our Inner Circle Customer Advocacy program.  The rest of this blog will distill what we think are some of the key conclusions.

 

The first thing we sought to understand was the state of adoption of AI and ML capabilities – enterprises’ level of comfort in making AI essentially a member of the “team.”  12% of Enterprise customers indicated that they are currently using these technologies, while 28% said they anticipate adoption in the next 3 years.  Customers are clearly in the early phase of adoption.

However, 49% stated that they have no plans yet to adopt such tools. This number surprised us, as there are many tools available today (including some of VMware’s current products, such as vRealize Operations) that incorporate machine learning capabilities in ways that may not be obvious to the end user.  In addition, the wide array of suggested meanings for the terms AI and ML likely causes confusion about what capabilities are implied.  The responses regarding adoption concerns later in the survey also throw additional light on this data point. Finally, there are levels of self* functionality – much like the levels of autonomy defined for self-driving cars – so we might presume that plans for AI/ML-directed automation vary with the level of sophistication, a hypothesis we test later.

As IT operations and other support teams perform a wide variety of functions, we wanted to identify where the application of AI and ML technology would be deemed most useful. Respondents were given the choice of selecting multiple options. The greatest interest was in tackling the fundamental tasks of troubleshooting as well as the related jobs of performance and capacity management (which differ primarily in regard to the context of time). Security management, not surprisingly, was also ranked very high, but we believe the increased urgency for added support in this area is largely due to the exponential growth in cyber threats.  The top four choices, each selected by more than 50% of the respondents, represent areas of increasing complexity and cognitive load requiring high levels of skill – latent demand for solutions from IT operations practitioners. While a result where every functional area was ranked high might not provide much insight, we were still surprised that AI/ML-based compliance and cost management ranked so low, given the increasing focus on regulatory requirements as well as the always pressing need to drive down costs.

While technological innovation is designed to solve a problem, it can also bring new challenges, and we sought to uncover the concerns with leveraging AI and ML for infrastructure operations.  Lack of technology maturity ranked highest and, as suggested earlier, there may be a lack of clarity about where machine learning has already been used within the organization.  Certainly, somewhat related statistical modeling capabilities have been used within IT for decades, so this concern is likely related to newer types of machine learning, such as reinforcement learning, which are relatively recent techniques in the industry.

It was very interesting to see added complexity listed so highly (29%).  There is a lot of research in areas such as aviation, plant operations and even today’s emerging autonomous car market where the implementation of automation and AI results in complexity emerging elsewhere (in the form of, for example, humans now having to understand what the machine is doing on our behalf).  Trustability was also a concern which research suggests may be due to the underlying algorithms being increasingly perceived as a “black box” and hence indecipherable due to their growing sophistication. Both of these data points suggest that the clarity of the human-to-machine interface will loom large.

We presented a model somewhat similar to that used to describe levels of autonomy for self-driving vehicles to assess the desired level of self-managed infrastructure sophistication.  Partial automation, which we described essentially as automation responding to known events, was the preferred level for enterprises, selected by thirty-nine percent.  Conditional automation, which in our categorization has significantly more advanced capabilities (such as real-time remediation and predictive analytics), was next at thirty-one percent.  Not surprisingly, neither lesser nor more advanced capabilities were widely chosen, which we attribute largely to the areas of concern explored in the previous question.

Understanding the primary drivers of AI/ML-based data center automation was another critical element to understand as we seek to deliver VMware’s vision.  Three of the four most critical desires didn’t really seem to emphasize much in the way of necessary system-based sophistication.  Classic automation today can improve agility and enable the transfer of funding from KTLO (keep the lights on) to innovation projects such as digital transformation and reduce overall spend.

Perhaps most interesting is the need to address complexity while at the same time limiting the added complexity of the solution as we saw earlier.  As we identified in the introduction, the conundrum that many IT operations and infrastructure organizations face is the requirement to support new technologies and platforms for the business without it resulting in the magnification of complexity and hence costs.  Another interesting data point which comes out is that the need to implement AI and ML-based solutions is not to primarily address the lack of skills.  Candidly, this seems a bit hard to square with the need to handle complexity since one would presume that they might be related.  However, as the survey data is focused on the needs of our largest enterprise customers, skills availability might not be as acute of a problem as typically faced in smaller organizations.

 

Summary

For the large VMware customers that we surveyed, there is a strong desire to have more of a self-managing infrastructure to enable IT infrastructure and operations to be a better partner to the business.  Enterprises are expecting AI and machine learning based solutions to have fairly sophisticated capabilities. Yet there are limits to the degree of autonomy that these same organizations wish to grant such solutions. Much of this revolves around concerns of maturity and transparency as well as ensuring that new forms of complexity do not arise in response.

What this tells us is that we should ensure that the interface between an increasingly sophisticated automated system and its human partners provides the necessary degree of explainability, awareness and even indicate the degree of uncertainty in terms of its algorithmic conclusions.  Only by establishing sufficient trust will the outcome be what some researchers would attest to as a truly effective “joint cognitive system.”

Last year VMware announced Project Magna, demonstrating our intention to evolve to a self-driving datacenter, where the infrastructure is continually and automatically optimized to deliver on the intended performance of dynamic applications.  Leveraging what is called reinforcement learning, Project Magna will initially focus on optimizing vSAN parameters to meet desired customer key performance indicators (KPIs).  For more information on this exciting new offering, please visit:

http://www.vmware.com/go/magna

 

The post VMware Customer Analysis on AI/ML Adoption in the Modern Data Center appeared first on VMware Cloud Management.

VMware vRealize Hands on Lab T-shirt Giveaway


Attention VMworld 2019 Europe attendees!  Want to get the MOST out of your SDDC?  Want to learn about the latest and greatest advancements in the vRealize Suite?  Want to show off with a new Cloud Management t-shirt?  We have the answer for you!

Come to the Hands on Labs (HOL) at VMworld 2019 Europe and complete one of three VMware vRealize Lightning Labs between Monday, November 4, 2019 through Wednesday, November 6, 2019.  Finish the short lab, learn something new and get a free t-shirt (while supplies last).  It’s that easy!

 

VMware vRealize Lightning Labs included in this giveaway:

  • HOL-2001-91-ISM – What’s New in VMware vRealize Operations 8.0 – Learn about the latest major release of vRealize Operations and get some insight into the artificial intelligence (AI) and machine learning (ML) enhancements that run self-driving operations.  In this lab you will get a quick view into new features like the updated hybrid and public cloud capacity and costing, powerful application monitoring and the exciting new Troubleshooting Workbench.
  • HOL-2021-91-ISM – What’s New in vRealize Automation 8.0 – vRealize Automation 8.0, a completely new codebase, with new integrations, and many new capabilities.  vRealize Automation 8.0 is built on a modern container-based microservices architecture, with improved scalability and performance. In this lab you will work with blueprints to deploy workloads and learn how this is done across a multi-cloud environment.  Deployment and management of application and infrastructure resources across a VMware Software Defined Datacenter and public cloud services has never been this comprehensive or straightforward.
  • HOL-2002-92-CMP – A Quick Introduction to vRealize Network Insight – Quickly learn the most important features of vRealize Network Insight. If you’re interested in application security, troubleshooting physical network devices, or managing and securing networks in private and public clouds, then this lab is for you.

Take a quick Hands on Lab, learn about cool new vRealize features, go back home all the wiser and get some new clothes.  What could be better!

If you won’t be able to make it to VMworld Europe, take a vRealize Hands-On Lab here!

 

The post VMware vRealize Hands on Lab T-shirt Giveaway appeared first on VMware Cloud Management.

What’s New in vRealize Operations 8.0?


Today marks general availability of vRealize Operations 8.0. I’m excited to tell you about the amazing features available in this release. Grab a coffee or tea and sit back because this is another huge release with lots of exciting new features.

Improved Initial Onboarding

To start things off, we’ve improved initial onboarding. For new deployments, you’ll be greeted with a new initial onboarding page with some guided tours and a great big suggestion to create your first Cloud Account.

Did I hear you ask what is a Cloud Account? One of the themes with this release is common constructs between vRealize Operations and vRealize Automation. A Cloud Account in vRealize Automation is the same thing as a Cloud Account in vRealize Operations. When you create a Cloud Account in vRealize Operations 8.0, you will get prompted to choose vCenter, AWS, or Azure accounts. Don’t forget that VMware Cloud on AWS is just another vCenter to vRealize Operations.

Intelligent Remediation

The first feature I want to mention in the Intelligent Remediation pillar is the new Troubleshooting Workbench. The Troubleshooting Workbench lets you use AI/ML technologies to quickly find the root cause of problems. These can be initiated from an alert or ad hoc by starting with a specific object. Objects you can troubleshoot with the workbench can be any object from any management pack.

The workbench presents three views of potential evidence:

  • Events displays major events and metrics that have breached their usual behavior within the selected scope and time.
  • Properties displays important configuration changes that occurred within the selected scope and time. Both single and multiple property changes are shown; for multiple changes, you can view the latest and previous values.
  • Anomalous Metrics displays metrics that have shown drastic changes within the selected scope and time, ranked by the degree of change. The most recent anomalous metric, based on a time-sliced comparison in the current time range, is given the highest weight.

Service Discovery

Service Discovery is now native in vRealize Operations. It comes with 41 known services by default, and you can add your own services to discover even more. You get all of this with only VMware Tools; no agent is needed.

You can use Service Discovery to automatically build applications from the discovered services.

The most exciting features that become available after you enable Service Discovery are the ability to execute scripts in the guest OS and the ability to list the top processes consuming resources inside the guest OS. Both of these simplify troubleshooting by removing the need to log in to a server to start triaging what’s happening inside the guest OS.

Application Monitoring

Application Monitoring now supports 20 packaged applications, with NTPD, Java, and WebSphere as the newcomers. OS monitoring support has been extended to include Photon OS and Ubuntu.

If you need to extend application monitoring, you can now create a custom script monitor to track anything you want using a script. The scripts are executed every 5 minutes, and the result is added as a metric on the object.
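The contract for such a script is simple: it prints a numeric value, which is then recorded as the metric. Here is a minimal sketch of what one might look like; the file-based queue-depth source and its path are purely hypothetical, not part of any product documentation:

```python
#!/usr/bin/env python3
"""Minimal sketch of a custom script monitor. The agent runs the
script on its collection interval and ingests the value printed to
stdout as a metric on the object. The queue directory below is a
made-up example of something worth monitoring."""

import os

QUEUE_DIR = "/var/spool/myapp/queue"  # hypothetical application queue

def collect() -> int:
    """Return the metric value: number of files waiting in the queue."""
    try:
        return len(os.listdir(QUEUE_DIR))
    except FileNotFoundError:
        return 0  # report zero if the app isn't present on this host

if __name__ == "__main__":
    # The script's only contract: print one numeric value and exit 0.
    print(collect())
```

Anything you can compute in five-minute intervals and express as a number can be tracked and alerted on this way.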

Intent-Driven Continuous Performance Optimization

To discuss the new features in our first pillar of self-driving operations of Intent-Driven Continuous Performance Optimization, I need to tell you a bit more about common constructs. We now have even more common constructs with vRealize Automation 8.0, which covers Cloud Zone, Organization, User, Project, Deployment, and Blueprints.

vRealize Operations can perform workload optimization for Cloud Zones. Cloud Zones are created in vRealize Automation and the management pack creates a comparable Cloud Zone object in vRealize Operations. When integrated, vRealize Operations will be responsible for Operational Intent and vRealize Automation will be responsible for Business Intent based on policies defined on the Cloud Zone.

To round out the vRealize Automation integration, there are 4 dashboards included out of the box. These dashboards give you an overview of the environment, prices for projects and deployments, resource consumption, and top N resources.

I know I’ve been talking about vRealize Automation 8.0, but don’t fear, the integration with vRealize Automation 7.x hasn’t changed, so you can integrate either version or both versions at the same time. Stay tuned for a more in-depth blog post about the integration between vRealize Operations 8.0 and vRealize Automation 8.0.

Efficient Capacity Management

The second pillar of self-driving operations is Efficient Capacity Management. The first new feature I wanted to mention is Capacity Buffer. Capacity Buffer is for customers that want to reserve capacity for capacity planning beyond what’s already reserved by admission control settings for HA. This feature adds a new metric named “Usable Capacity after HA and Buffer”, which has a self-explanatory name, but you can see how it relates to other capacity metrics in the diagram below.
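The idea behind the metric can be sketched as simple arithmetic. This is an illustration only, assuming the buffer is expressed as a percentage applied after the HA reservation; it is not the product’s published formula:

```python
def usable_after_ha_and_buffer(total_ghz: float,
                               ha_reserved_pct: float,
                               buffer_pct: float) -> float:
    """Illustrative arithmetic for a "Usable Capacity after HA and
    Buffer" style metric: start from total cluster capacity, subtract
    the HA admission-control reservation, then subtract the capacity
    buffer. (Sketch only; percentages and ordering are assumptions.)"""
    after_ha = total_ghz * (1 - ha_reserved_pct / 100)
    return after_ha * (1 - buffer_pct / 100)

# A 100 GHz cluster with 25% reserved for HA and a 10% buffer
# leaves 100 * 0.75 * 0.90 = 67.5 GHz for capacity planning.
print(usable_after_ha_and_buffer(100, 25, 10))  # → 67.5
```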

What-If Analysis

What-If Analysis has many new features of note. We now have options to plan for adding and removing VMs and hosts from vSAN clusters, which includes awareness of vSAN Storage Policies.

When modelling the addition of VMs to a vSAN cluster, you can specify swap space, host failures to tolerate, fault tolerance method, and expected deduplication ratio. The projections take advantage of the vSAN sizer to determine if the VMs will fit. Since the projections are based on the vSAN sizer, you can expect the same recommendations from vRealize Operations as from the standalone tool.

When you’re looking to deploy new VMs, have you ever wondered which datacenter would be the best option from a cost perspective? Well, now you can with the new Datacenter Comparison scenario. With this scenario, you can now see how much VMs would cost running in multiple datacenters. If the system determines that the VMs won’t fit in your desired datacenter, there is a quick link to Add Workload scenario to help you dig into the details of why the VMs won’t fit.

Assess Cost

Cost management is a critical component for managing capacity. To help improve cost reporting, you’ll now be able to specify different costs per datacenter in the Storage, License, Maintenance, Labor, Network, and Facilities cost drivers. Other cost driver improvements are support for 25, 40, and 100 Gb NICs and the ability to add additional costs based on custom properties assigned to VMs in vRealize Automation 8.0.

The Cluster Base Rate calculation method is how you tell vRealize Operations to divide up cluster costs across the VMs within the cluster. What was previously known as Expected Utilization is now linked to the previously mentioned Capacity Buffer setting. If you set a Capacity Buffer on the cluster, that same setting can be used to determine the cost per GHz of CPU and GB of RAM if you select the Usable Capacity after HA and Buffer mode.
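As a rough illustration of how a base rate falls out of that choice (the 50/50 CPU/memory cost split and the figures are assumptions for the sketch, not the product’s exact calculation), dividing the cluster’s cost by its usable capacity yields per-unit rates:

```python
def base_rates(monthly_cluster_cost: float,
               usable_cpu_ghz: float,
               usable_mem_gb: float,
               cpu_cost_share: float = 0.5):
    """Sketch of a cluster base-rate calculation: split the cluster's
    monthly cost between CPU and memory, then divide each share by the
    usable capacity (e.g. "Usable Capacity after HA and Buffer").
    The 50/50 split is an illustrative assumption."""
    cpu_rate = monthly_cluster_cost * cpu_cost_share / usable_cpu_ghz
    mem_rate = monthly_cluster_cost * (1 - cpu_cost_share) / usable_mem_gb
    return cpu_rate, mem_rate  # ($/GHz, $/GB) per month

cpu, mem = base_rates(9000, 300, 1500)
print(f"${cpu:.2f}/GHz, ${mem:.2f}/GB per month")  # → $15.00/GHz, $3.00/GB per month
```

The smaller the usable capacity you divide by (for example, after reserving a buffer), the higher the resulting rate per GHz and per GB.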

We have 2 new dashboards for cost management. The first is Datacenter Cost Drivers, which helps you drill down into the costs per cost driver per datacenter to get a better understanding of what drives the cost of each datacenter.

The other new dashboard is called Showback, which is intended to help you show the cost of VMs based on Custom Groups, Applications, Cloud Zones, Projects, and many more.

Last, but not least, is the integration with vRealize Automation 8.0 to show cost estimates at request time for on-premises vSphere VMs, as well as the ongoing cost of on-premises vSphere VMs in a deployment.

Integrated Compliance

For the Integrated Compliance pillar, we now have support for monitoring compliance for vSAN and NSX-T out of the box. Compliance even supports VMware Cloud on AWS, which includes vSAN and NSX-T as well.

Platform

Do you have a need to stretch a vRealize Operations cluster across 2 fault domains? If so, I’ve got great news for you. That’s now supported with Continuous Availability. You can now stretch a cluster across 2 fault domains with a tertiary site hosting a witness node. With this new deployment model, you deploy nodes in pairs, one per fault domain. Once Continuous Availability is enabled, your cluster will be able to survive the loss of an entire fault domain.

A great way to learn more is to download a trial of vRealize Operations and try it in your own environment! You can find more demos and videos on vrealize.vmware.com. Be sure to stay tuned here as we will have even more blogs about vRealize Operations 8.0 coming soon.

The post What’s New in vRealize Operations 8.0? appeared first on VMware Cloud Management.

What-If Analysis with vRealize Operations 8.0


There is more to capacity management for vSphere environments than looking at what’s currently running in your clusters. Clusters are always in flux, with VMs and hosts being added and removed. There can be situations that require looking at migrating workloads to other datacenters or the public cloud. If you’ve run into any of these challenges, then you’re in luck: vRealize Operations helps with all of these situations through what’s called “What-If Analysis”. I’d like to take a little time to tell you about the new features in vRealize Operations 8.0 What-If Analysis that will make your capacity planning workflow even easier.

Annual Projected Growth

The first new feature that applies to Add VM and Datacenter Compare what-if scenarios is called Annual Projected Growth. As you know, most VMs demand more resources over time, so when you’re planning for new VMs it’s nice to be able to account for that growth. Annual Projected Growth allows you to tell vRealize Operations how much you expect that VM to grow over the next year. You can enter a single percentage growth for the VM or separate percentage growth for CPU, memory, and disk space. As you can see below, by adding 20% growth, the projection over the next year reflects that 20% increase in workload.
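The effect on the projection is straightforward arithmetic. As an illustration, assuming the growth is applied as a linear ramp across the year (a sketch assumption, not the product’s documented model):

```python
def projected_demand(current: float, annual_growth_pct: float,
                     days_out: int) -> float:
    """Sketch of an annual-projected-growth adjustment: scale today's
    demand so that it is annual_growth_pct higher after one year.
    (The linear ramp is an assumption for illustration.)"""
    return current * (1 + annual_growth_pct / 100 * days_out / 365)

# A VM using 8 GB today with 20% annual projected growth is modeled
# at 8 * 1.2 = 9.6 GB one year out.
print(projected_demand(8, 20, 365))  # → 9.6
```

Entering separate percentages for CPU, memory, and disk simply applies the same scaling independently to each resource.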

HCI Planning

What-If analysis for HCI environments comes with additional challenges because the amount of disk space required for a VM depends on storage policies. Storage policies let you define the number of host failures to tolerate, the fault tolerance method (RAID levels), and deduplication. The new Add and Remove VMs and Add and Remove HCI Nodes scenarios in vRealize Operations 8.0 have that awareness built in and allow you to enter the vSAN configuration when running the what-if scenario. For those of you familiar with the vSAN Sizer, its code is embedded in vRealize Operations to allow it to provide accurate disk space recommendations.
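To see why policy awareness matters, consider the standard vSAN capacity overhead for each protection scheme. This is a simplified sketch; a real sizing exercise also accounts for things like slack space, swap, and checksum overhead, and the example figures are illustrative:

```python
# Simplified sketch of vSAN raw-capacity math. The replication
# factors are the standard vSAN overheads for each policy; the
# dedup ratio and example figures are illustrative.
OVERHEAD = {
    ("RAID-1", 1): 2.0,   # FTT=1 mirroring: 2 copies
    ("RAID-1", 2): 3.0,   # FTT=2 mirroring: 3 copies
    ("RAID-5", 1): 4 / 3, # FTT=1 erasure coding: 3 data + 1 parity
    ("RAID-6", 2): 1.5,   # FTT=2 erasure coding: 4 data + 2 parity
}

def raw_capacity_needed(vm_usable_gb: float, method: str, ftt: int,
                        dedup_ratio: float = 1.0) -> float:
    """Raw vSAN capacity consumed for a given usable size and policy."""
    return vm_usable_gb * OVERHEAD[(method, ftt)] / dedup_ratio

# 100 GB of VM data under RAID-1/FTT=1 with 2:1 dedup needs ~100 GB raw,
# while RAID-5/FTT=1 without dedup needs ~133 GB raw.
print(raw_capacity_needed(100, "RAID-1", 1, dedup_ratio=2.0))  # → 100.0
print(round(raw_capacity_needed(100, "RAID-5", 1), 1))         # → 133.3
```

The same 100 GB of data can consume anywhere from roughly 1x to 3x its size in raw capacity depending on the policy, which is exactly why the scenarios ask for the vSAN configuration up front.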

This screenshot shows two what-if scenarios stacked. The first is to add vSAN nodes to the cluster and the second is to add VMs with storage policies. The stacked scenario shows the new VMs will fit (if a new host is added), and based on the projected utilization, will run out of disk space in about 140 days from now.

Datacenter Compare

When you’re looking to deploy new VMs, have you ever wondered which datacenter would be the best option from a cost perspective? Well, now you can with the new Datacenter Comparison scenario. With this scenario, you can now see how much VMs would cost running in multiple datacenters. If the system determines that the VMs won’t fit in your desired datacenter, there is a quick link to Add Workload scenario to help you dig into the details of why the VMs won’t fit.

If you want to learn more about capacity planning or cost management with vRealize Operations I have 3 sessions at VMworld next week that might be of interest to you.

  • Yes, Mr. CFO, IT Is Expensive — Understand Why with vRealize Operations [HBO1138BE]
  • It’s Not 2005 Anymore: Stop Using Excel for Capacity Management [HBO1136BE]
  • Capacity and cost optimization with vRealize Operations [MTE6044E]

A great way to learn more is to download a trial of vRealize Operations and try it in your own environment! You can find more demos and videos on vrealize.vmware.com. Be sure to stay tuned here as we will have even more blogs about vRealize Operations 8.0 coming soon.

The post What-If Analysis with vRealize Operations 8.0 appeared first on VMware Cloud Management.
