
Hybrid Cloud Assessment Launched!


Hybrid Cloud Assessment (HCA)

Hybrid Cloud Assessment answers your cloud cost questions in less than 3 hours. HCA helps you understand your existing private cloud costs, compares public and private cloud costs, and enables IT teams to confidently share actual cost information with their lines of business. At the end of the assessment you receive a report.

 

Why take Hybrid Cloud Assessment (HCA)?

  • Gain Insight into Existing Cloud Costs: Easily understand the cost of your existing private cloud infrastructure. Quickly assess business spending across multiple public cloud accounts and providers.
  • Speed Decision Making: On-Premises or Cloud? Make informed purchasing decisions by quickly comparing private and public cloud costs. Save time in workload run discussions using capacity comparisons and “what if” scenarios.
  • Uncover Cost-Saving Opportunities: Reduce public and private cloud deployment costs. Identify reclamation cost savings by performing HCA with VOA (vSphere Optimization Assessment).
  • Share Data with Confidence: Establish IT as strategic business partner by sharing actual costs with line-of-business leaders. Quantify cloud consumption across business groups, applications, and services.

The HCA report compares vSphere private cloud costs against AWS and Azure public cloud costs, provides a deep-dive analysis of private cloud infrastructure costs (such as VM count and average VM cost), shows actual costs across different lines of business as a showback statement, and identifies reclamation cost-saving opportunities for your private cloud.

The HCA report is generated from real-world customer data center information using VMware vRealize®. What’s great about HCA is that it demonstrates the immediate value of the vRealize Business for Cloud solution, so customers can make more informed buying decisions about VMware IT management solutions. HCA is a great way to initiate cloud journey conversations with your customers. Less than 3 hours! That’s how long a Hybrid Cloud Assessment takes.

 

Completing a Hybrid Cloud Assessment with VMware is easy. Simply submit the form and a cloud expert will contact you.

Get your HCA Report today!



Aligning vRealize Operations with Business Outcomes – vROps White Paper


VMware just published my vROps White Paper, “Aligning vRealize Operations with Business Outcomes”, on the VMware Tech Papers site. The white paper explores a common theme among VMware customers: they want IT reporting that closely resembles their lines of business, such as services, departments, applications, and other logical business models. For most customers, measuring performance, utilization, and consumption at the logical business unit matters more to business leaders than measuring it at the vSphere cluster, because today’s services and applications can span multiple infrastructures, from private, through hybrid, to public. IT reporting around business services has been difficult with traditional tools because they are very infrastructure-centric, and the lack of metadata or context, together with abstraction technologies, compounds the problem, making simple correlation difficult.

This vROps White Paper is your guide

This vROps White Paper is designed to serve as a comprehensive guide that ties various distinct vROps features into a cohesive solution, with the goal of enabling IT to become more transparent and aligned with the business structure. The white paper walks through extending vROps to provide business-oriented reporting that empowers business stakeholders to make better decisions by gaining visibility and insight into the business services in their datacenter or cloud. It covers the following topics in depth and connects the dots among them:

Custom Groups

Define Custom Groups that align the IT infrastructure with the business structure of your organization.

Super Metrics

Turn business questions into Super Metrics that quantify the way you do business by metering service utilization, performance, and consumption.
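
Turning a business question into an aggregate is the core idea here. As a hedged illustration (the Super Metric expression syntax varies by vROps version, and the group, VM and metric names below are invented), the question “what is the total CPU demand of the Payroll service?” reduces to a sum over the members of a Custom Group:

```python
# Illustrative only: a Super Metric aggregates a metric across the members of
# a Custom Group. A roughly equivalent vROps expression (check your version's
# Super Metric syntax) might look like:
#   sum(${adaptertype=VMWARE, objecttype=VirtualMachine, metric=cpu|demandmhz, depth=1})
# Below, the same aggregation over hypothetical sampled data:

payroll_vm_cpu_demand_mhz = {"vm-01": 750.0, "vm-02": 420.0, "vm-03": 1310.0}

def service_cpu_demand(samples: dict) -> float:
    """Total CPU demand for a business service = sum over its member VMs."""
    return sum(samples.values())

print(f"Payroll service CPU demand: {service_cpu_demand(payroll_vm_cpu_demand_mhz):.0f} MHz")
```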

Custom Dashboards

Present the new-found information in business consumable dashboards that aid in the decision-making processes.

Security

Provide secure access and authorize users to only view information relevant to their organizational role.

Automation

Explore options to automate individual aspects of this process, or the complete solution.

In conclusion, vRealize Operations Manager does a great job at providing infrastructure analytics; however, it does not automatically provide performance, utilization, and consumption around business units, applications, and services. Leveraging this white paper, IT can use the same set of analytics they are already familiar with to answer real-world business questions and better align IT with business-oriented outcomes.


vROps – A Methodology for Authoring Dashboards


vRealize Operations Manager, as we know it today, comes packed with a vast array of built-in content. We have dashboards, views, reports, alerts, and a myriad of other content to help us paint the right picture for different types of users. Of this content, the most flexible, but also the most daunting, can be dashboards. While it is admittedly easy to drag and drop a few widgets onto a dashboard, a common pitfall is not fully understanding the capabilities of widgets and how to arrange them into the most efficient workflows. In this article, we’re going to discuss some methodologies for defining workflows, choosing widgets and building interactions to create the most effective dashboards.

Define the Dashboard’s Objective

Any good dashboard begins with the definition of a business objective. The pursuit of this objective gives the dashboard purpose and, if it works well, value. A dashboard may present new information or simply reorganize old information to offer new meaning, but the basic idea in authoring a dashboard is to work towards satisfying a defined business objective and to generate more value for the audience than previously existed. Regardless of what the objective may be, the key is to be clear, concise, and realistic in what you expect from the dashboard.

As an example, we could state a business objective of needing a better understanding of workloads that are contributing to the disk I/O load on Datastore objects.

Plan a Workflow

Once a dashboard’s objective has been defined and the desired value is understood, we’ll begin to construct the dashboard’s workflow. The most straightforward way to construct a workflow is to understand the desired end value, put a box around it, and work in reverse to see how it can be consistently created. Every workflow should be easily repeatable for different users and should not assume users have pre-existing knowledge of how to arrive at the desired end value. Working in reverse from the desired end value, identify all of the steps that may be necessary to find it. Once these reverse steps lead to a logical point where you could feasibly begin the workflow from scratch, you’ve identified the workflow’s starting point. Between the start and end of the workflow, all objects, metrics, relationships and product capabilities must be taken into consideration. Discovering this initial workflow may have been accomplished using the different product UI elements, along with user intuition and memory, but the dashboard workflow will need to be accomplished using solely the dashboard’s capabilities. Ensure the workflow leverages these capabilities and minimizes the need for users to manually take steps outside the dashboard.

Continuing our previously stated example, we know for certain that we want to end the workflow with an understanding of each Datastore object and each of the elements that contribute to its disk I/O. We know that Virtual Machines are likely the primary contributors of this disk I/O workload, but we cannot be certain until we take a deeper look at the data.

Note: All workflows will rely on some understanding of the data and components being analyzed, with this example requiring VMware vSphere knowledge. When creating content that is intended to help others in analyzing and understanding data, it is crucial that the content author have a sufficient understanding of the subject matter at hand, without which the dashboard may ultimately emphasize the wrong elements in the wrong way and fail to meet the defined business objective.

Get to Know Your Data

At any given point, a dashboard will be limited by the type and quality of the data it has to work with within vRealize Operations Manager. Most workflows will require a particular set of objects and metrics, connected together with relationships to represent a cohesive chain of data that results in a repeatable workflow.

Before creating content for a dashboard, a thorough discovery of the environment’s data should be completed. This includes navigating vRealize Operations Manager to discover objects and metrics that relate to the business objective or use case, and how those elements are associated with one another through relationships. Some UI areas that may be helpful for this step are the “All Metrics” and “Environment Map” tabs. These tabs allow visual exploration of the environment in a manner that may identify objects, metrics and relationships to leverage in a dashboard.

In the absence of objects, metrics or relationships that may be necessary to meet the requirements of a workflow, several workarounds may be available. These may include the following:

  • Add data sources or integrations to allow more data for analysis.
    • When objects, metrics or relationships are absent, it may be necessary to further enrich the data within vRealize Operations Manager. This may be accomplished by adding additional Management Pack(s) or custom integration(s).
  • Create Supermetrics that calculate derived values that may be missing from existing data.
    • When metric values are absent, such as aggregate computations, a Supermetric may be used to create data points necessary for dashboard analysis.
  • Leverage dashboard analysis widgets that provide analysis capabilities on-demand.
    • Some dashboard widgets offer visual analysis capabilities that are not otherwise available in the product. One example of this is the Forensics widget, which offers a density and distribution histogram with the 75th, 90th and 95th percentiles of a dataset (a minimal sketch of this computation follows this list).
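
To make the percentile idea concrete, here is a minimal sketch of the computation the Forensics widget visualizes. This is our own illustration on synthetic data, not product code:

```python
import numpy as np

# Synthetic stand-in for any metric series pulled from vRealize Operations.
samples = np.random.gamma(shape=2.0, scale=15.0, size=2000)

# The distribution view: histogram counts over 20 buckets.
counts, edges = np.histogram(samples, bins=20)

# The percentile markers the Forensics widget overlays on the histogram.
for p in (75, 90, 95):
    print(f"{p}th percentile: {np.percentile(samples, p):.1f}")
```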

In our example, we want to discover the elements related to our Datastores and what may be contributing to their disk I/O workload. We would begin by selecting a Datastore in the left navigation pane, then opening the “All Metrics” tab in the right pane. Expand the top of the metrics graph area to include the Environment Map. From this point, different objects in the hierarchy can be selected, metrics graphed, and relationships identified.

Pick Your Widgets

Once the dashboard’s workflow is defined and data elements are understood, the process of selecting graphical elements may begin. Graphical elements in a dashboard are called widgets, with each widget offering a unique set of capabilities. These widgets are used to present information to the users, allow user interaction and analysis, and ultimately form the building blocks of the dashboard.

Each widget has a unique set of capabilities and strengths, but a universal concept is that widgets can be providers, receivers, or both. Providers often have the ability to ‘self-provide’ data, meaning they have a configuration option to independently populate data without the need for user interaction or initial population. These provider widgets will be the beginning of every dashboard’s workflow. Receiving widgets will often be blank or unpopulated until data is passed to them via a widget interaction. These receiving widgets will typically be used for displaying data, such as graphs, and providing a point where deeper analysis can happen. Receiving widgets may also be providing widgets, offering the ability to continue workflows that have been passed to them via widget interactions.

The capabilities and nuances of each widget are outside the scope of this article, but the following widget capability matrix may be beneficial in understanding the high-level provider/receiver capabilities of each widget. This information will aid in the selection of widgets for a dashboard’s workflow.

vRealize Operations Manager 6.5 – Widget Capability Matrix

When selecting widgets and deciding how they may be populated, it is important to mind the aesthetics of the workflow and overall dashboard. A common mistake when creating a dashboard is to include as much detail as technically possible, using widgets to display buckets of data points and crafting Supermetrics to display every perceivable side of the data. Not only is this unnecessary, but it clutters the dashboard and becomes confusing for the audience. For example, an author may opt to display ‘total’, ‘free’, ‘used’, ‘%-used’ and ‘%-free’ metrics, whereas ‘%-used’ alone would be sufficient to identify an actionable situation.

As a rule of thumb, I recommend keeping the quantity of widgets on each dashboard below 6, occasionally using up to 10 for elaborate workflows with interactions. If data isn’t actionable or meaningful, it shouldn’t be displayed or emphasized. In short – less is more.

In our example, we’ll need the ability to navigate the associations between Datastores and Virtual Machines to accomplish the workflow needed to meet our objective. Given this requirement, we may opt to use the Object List widget, which has the ability to show “children of” and “parents of” objects that are passed to it through a widget interaction. This ability will allow us to pass a selected Datastore to an Object List and display all Virtual Machines that have a single north-bound relationship with that Datastore. In other words, this widget will let us select a Datastore and see exactly what Virtual Machines may be generating disk I/O load on it.

Plan Interactions

If we view widgets as the building blocks of a dashboard, the mortar that holds them together is widget interactions. These interactions enable data to be passed between widgets and ultimately result in a more robust user experience than an otherwise static dashboard. While interactions aren’t appropriate for every objective or use case, they allow a deeper degree of widget analysis than would otherwise be possible with a static, hands-off dashboard.

When determining where and how interactions will be used, it is important to plan how the user will approach the dashboard and how their attention is directed to the beginning of a workflow. Placement of widgets influences the direction of user attention, with the top and top-left of a dashboard being a natural place to begin looking for a workflow. Depending on the dashboard’s audience, it may be beneficial to label widget titles as start points in the workflow, indicating “Select an object to begin”. Alternatively, a Text widget may be used to provide more detailed instructions on how to interact with and interpret a dashboard.

Interactions allow the selection of data within one widget and the automatic population of that data within one or more other widgets. A receiving widget may show the object passed to it using that widget’s analysis capabilities, or it may show related objects (parent or child) that can then be selected and passed to yet another widget. This sequence of interaction is key to defining a successful workflow. There are no limits to how many interactions can be used, but at a certain point a dashboard will run out of browser real estate for widget analysis – mind the best practice of having at most six (6) widgets, with special workflows requiring up to ten (10). There is also an option to leverage dashboard interactions, which can pass data from a widget to a widget in a different dashboard where the workflow may continue. Select widgets also permit ‘multi-select interactions’, which allow the passing of multiple objects between widgets in the same interaction.

A major benefit of leveraging interactions is the reduction of redundant widgets and static information on dashboards. Where a static dashboard may need to be configured to display dozens of metrics to meet a use case requirement, an interactive dashboard may display a list of objects, allow an object selection, and subsequently display several key indicators for that object. The result is a dynamic and less cluttered user experience, with users being empowered to view specific information instead of being overwhelmed with too much information.

While planning widget interactions for our example, we could opt to begin the workflow with an Object List that displays Datastore objects. When a Datastore is selected, a widget interaction passes objects to another Object List that is set to show “parents of” the object(s) passed to it. To continue our workflow towards our objective of understanding disk I/O on a Datastore, we may add columns to the receiving Object List to show Virtual Disk Aggregate Commands Per Second. The result is a two-widget dashboard that uses a widget interaction to dynamically show all Virtual Machines on each selected Datastore, itemized along with their latest disk I/O load. Dashboard complete!

Test Drive

Once a dashboard has taken form, the next step is taking it for a test drive to see if it actually meets the objectives you set out to address in the first place. It should be expected that iterative revisions will occur, largely because a dashboard’s perceived value is subjective and, as such, expectations and requirements may change over time.

Refinement and Maintenance

As dashboards are created and destroyed, one theme proves true time and time again: dashboards built on dynamic structures and populations have far more longevity than those built and maintained to statically populate data. Given this reality, Custom Groups with dynamic membership and other dynamic filtering mechanisms are the preferred means to populate providing widgets in dashboards.

Organization of content is also key to a healthy dashboard lifecycle. Naming conventions for dashboards should be standardized within teams, as should the use of dashboard Tab Groups to organize content into subject matter areas.

Distribution of content by means of Dashboard Sharing should be given some thought. vRealize Operations Manager allows dashboards to have only a single owning user at any given time, making content distribution controls all the more important. A dashboard’s owner is the only user capable of editing that shared dashboard. The simplest way to address content management of this type is to nominate a service account as the owner and repository of shared dashboards. This creates a single place where dashboards can be edited and maintained for an organization.

Conclusion

With this basic methodology in authoring dashboards, any user can successfully create a dashboard that has value. This methodology was developed through my years of delivering vRealize Operations Manager in the field, and I hope that by reading this article you’ve gained a better understanding of dashboards and how to ensure they’ll be a success within your organizations.


Automated Workload Balance


Welcome to your lights-out datacenter!  With vRealize Operations 6.6, automated workload balance is easier and more controllable than ever.  This latest release of vRealize Operations gives you the ability to balance workloads across clusters and datastores, simple controls to govern how much balance you want based on your business needs, three ways to activate it (automated, scheduled or manual), and a powerful dashboard from which you can view and regulate everything related to balance.

Why is workload balance so important?

Workload balance ensures all clusters have enough resources to avoid future contention.  Contention is the bane of any VI/Cloud administrator, because it means applications and users are adversely affected.  Do these questions sound like something you have faced in the past?

  • I don’t want to have a contention issue with my business critical applications.  How do I ensure that clusters don’t get filled up beyond a target utilization level that will ensure there are enough resources for all?
  • If clusters and datastores get full, how can my team build the confidence to move VMs around in a way that guarantees the VMs and the applications get the resources they want?
  • Budget cuts are happening, and I must ensure Windows and Oracle license costs are contained. Can I limit their movement to a small set of clusters that I pay for?

Workload balance attempts to prevent hot spots by intelligently spreading VMs across clusters and datastores.  If a cluster is filled to the brim with VMs, any resource spike will cause contention in the cluster.  However, if we balance the VMs across clusters, giving each cluster a little bit of wiggle room, they can better handle any sudden resource needs. Think of workload balance as an insurance policy against resource contention. Automating this process is KEY because it allows the system to adjust automatically when balance is needed, which means healthier applications and more time for you to concentrate on strategic work instead of application babysitting.

Wait a minute doesn’t DRS do that?

Distributed Resource Scheduler (DRS) and Storage DRS work to ensure balance and fight contention at the host level, whereas vRealize Operations workload balance does this at the cluster level.  You can think of it like this: DRS is the best solution to automatically mitigate contention WITHIN the cluster, and vRealize Operations is the best solution to automatically mitigate contention BETWEEN clusters.  Together they form one tightly integrated, cohesive solution working to ensure your applications are always getting the resources they need.

How do other niche products move VMs?

Other products that claim to deal with contention will FIGHT with DRS, causing a “ping-pong” effect.  This happens when such a product moves a VM onto a host, but DRS quickly determines it’s not the RIGHT host for the workload and then moves it to an appropriate host.  The VM ends up being moved multiple times before it is properly placed. This is wasted overhead, and if the ping-ponging gets bad enough it can actually CAUSE contention.  To resolve this, these products have you turn off DRS altogether, leaving you without the benefits of DRS like host maintenance mode or affinity and anti-affinity rules, among many others.  It’s like someone giving you a new pair of gloves and then asking you to cut your hands off.  It just doesn’t make sense.

So how does automated workload balance work?

vRealize Operations workload balance runs at the datacenter level and watches the CPU, memory and disk space across the different clusters therein.  When it determines clusters are out of balance, it steps in to provide a recommended balance plan highlighting which VMs should be moved to which clusters.  It also provides a view of the resource utilization before and after the balance occurs so you can visualize the benefits.  Again, this entire process can be automated and scheduled, as we will see later, to make it hands-free.
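
To make the idea concrete, here is a deliberately simplified toy sketch of cluster-level rebalancing. This is NOT VMware’s actual placement logic; the greedy strategy, names and thresholds are all invented for illustration:

```python
# Toy illustration only -- not the actual vRealize Operations algorithm.
# Idea: propose moves from clusters above a target utilization to the
# cluster that currently has the most headroom.
def propose_moves(clusters: dict, vm_load: dict, vm_home: dict, target: float):
    """clusters: name -> capacity; vm_load: vm -> demand; vm_home: vm -> cluster."""
    used = {c: 0.0 for c in clusters}
    for vm, c in vm_home.items():
        used[c] += vm_load[vm]
    moves = []
    for vm, src in sorted(vm_home.items(), key=lambda kv: -vm_load[kv[0]]):
        if used[src] / clusters[src] <= target:
            continue  # source cluster is already under the target
        dst = min(clusters, key=lambda c: used[c] / clusters[c])
        if dst != src and (used[dst] + vm_load[vm]) / clusters[dst] < target:
            moves.append((vm, src, dst))
            used[src] -= vm_load[vm]
            used[dst] += vm_load[vm]
    return moves

# Two 100-unit clusters, one overloaded: the plan moves the biggest VM over.
print(propose_moves({"A": 100.0, "B": 100.0},
                    {"vm1": 40.0, "vm2": 35.0, "vm3": 10.0},
                    {"vm1": "A", "vm2": "A", "vm3": "A"}, target=0.6))
```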

Once you accept the balance plan, vRealize Operations workload balance begins moving the VMs to their new cluster and allows DRS/SDRS to determine the proper host/datastore within the cluster in which to place the VMs. This means no “ping-pong” effect.  We also get the added benefit of leveraging the DRS processes that ensure VMs are running on the right hosts (e.g. HW compatibility checks, HA & admission control policies, reservations, shares and limits).

How do you control balance?

vRealize Operations workload balance gives you complete control over how evenly you want to spread your workloads across clusters.  It provides two simple-to-use “knobs” that regulate the balance process.

The first is the Balance Workloads slider, which simply states how aggressively you want to pursue balance for a given datacenter.  A setting of CONSERVATIVE means not worrying about balance until one of the clusters starts to get too busy to handle the load.  This might be something you would use in a very dynamic environment.  On the other hand, the AGGRESSIVE setting tries to keep things as closely balanced as possible.  Obviously, the latter means more VMs will move, but it also means the clusters will be better equipped to deal with any resource spikes (remember, this is an insurance policy against contention).

 

The Cluster Headroom setting allows you to control “how full is full” for the clusters and datastores.  It lets you set a percentage of free space you want kept available, ensuring clusters have a resource buffer for CPU, memory and disk space with which you are comfortable.  When a cluster breaches that Cluster Headroom barrier, it is a good indicator that a rebalance may be needed.
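
The arithmetic behind that trigger is simple. The sketch below is our own illustration of the breach check, not a product API, and the numbers are hypothetical:

```python
# Sketch of the Cluster Headroom idea: with 20% headroom configured, a
# rebalance is indicated once utilization crosses 80% of capacity.
def breaches_headroom(capacity: float, used: float, headroom_pct: float) -> bool:
    threshold = capacity * (1 - headroom_pct / 100)
    return used > threshold

# A 100 GHz cluster with 20% headroom: the trigger point is 80 GHz.
print(breaches_headroom(capacity=100.0, used=85.0, headroom_pct=20))  # True
```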

When does workload balance take action?

There are three ways to engage the workload balance process and begin moving VMs: manual, automated and scheduled.

The manual method can be run directly from one of the rebalance alerts, which indicate a workload balance is needed based on your Balance Workloads and Cluster Headroom settings above.  It can also be run from the Workload Balance dashboard, which gives you a quick view of the current workload of each cluster so you can determine whether a rebalance is desired.

Either way, simply click the REBALANCE button, review the balance plan and hit START to run it.  Easy!

But running this process manually is old-school, and in today’s datacenters we need to automate as much as possible.  Workload balance in vRealize Operations can be automated on a per-datacenter basis by simply switching the Automate flag to “yes” in the policy.  This means you can automate it in your test datacenter and not in production, or vice versa.  The next time a rebalance alert is generated, instead of you having to deal with it, it is sent to the automation framework of vRealize Operations and automatically remediated.

The third option is, in my opinion, the best one yet.  Instead of manually inspecting your datacenter balance, or waiting for an alert to fire and be automatically fixed (which could happen in the middle of a work day), why not create an ongoing rebalance schedule?  vRealize Operations allows you to ensure balance across your clusters using a scheduled process that can be run during your standard maintenance windows, which is when these types of actions should be taken.   In a very dynamic datacenter you may want to do this once a week, or once a month in a more static environment. You should of course still use the automated method as a backup in case it’s needed, but with a scheduled rebalance hopefully it never will be.

Everything in one place

Finally, vRealize Operations 6.6 provides you one place to go and do everything associated with balance.  The new Workload Balance dashboard gives you the ability to view current cluster balance in each datacenter, access to the workload balance settings, the ability to set up a rebalance schedule, and the ability to run a rebalance action at any time.  It’s your one-stop-shop for automated workload balance!

 

Try vRealize Operations 6.6 today and experience how automated workload balance can change your environment into a contention-free, lights-out datacenter!  Stop babysitting your applications and free yourself up to start thinking about bigger, more strategic activities in the workplace… or maybe take a well-deserved vacation.

If you want to see it all working you can watch this Automated Workload Balance video.


vRealize Operations 6.6: “I’m too sexy for my skin!”


vRealize Operations 6.6 gets sleeker and sexier in its new skin!

On June 13, many of the vRealize Suite components (namely vRealize Operations (vR Ops), vRealize Log Insight (vRLI), and vRealize Business for Cloud (vRBC)) had an updated GA release.  With this latest release of vR Ops, a big focus has been on “simplifying the user experience” and “quicker time to value”.  We really want to simplify the lives of “Anita” the VI Admin, “Brian” the Infrastructure and Operations Admin, and “Ethan” the Fire Fighter.

The slick new HTML5 UI is based on the Clarity Design System.  Upon login to vR Ops you will see the following screen.  You will notice that the left-hand pane has single-click links to commonly used environment overview dashboards, including Workload Balancing, and brings Log Analytics and Business and Cost Insights all into one place – vR Ops.

 

 

See the FULL picture!

While we’re here let us first take a quick peek at bringing together Log Analytics with vR Ops.  We commonly refer to this as “360 degree troubleshooting” as you are able to troubleshoot across structured and unstructured data in one place.

 

See the BIG picture!

Secondly, take a look at how Cloud Costing and vR Ops come together.  Imagine being able to do things like capacity management or forecasting and being able to see the cost associated with these activities, or looking at reclaiming capacity and being able to associate a dollar figure to the potential cost (and resource) savings.

 

BALANCE your life, yes you “Anita”, “Brian” and “Ethan”!

WOW, how about the enhanced Workload Balancing?  Validate and modify DRS settings; Rebalance unbalanced Data Centers or Custom Data Centers, and Automate!

 

Persona-Based Content

Let’s head over to the DASHBOARD page.  Start with the “Getting Started” dashboard.  This is a persona-based dashboard that allows the user to look at five different categories of dashboards and very quickly open any of them from there.  These categories are: Operations, Capacity & Utilization, Performance Troubleshooting, Workload Balance, and Configuration & Compliance.  Also included are vSAN dashboards as well as vRealize Automation (vRA) dashboards.  In this release both vSAN and vRA are natively supported, so anyone using vSAN or vRA can quickly take advantage of this native support.

 

Resolve Issues Faster

What about Alerting?  You can now slice and dice alerts any way you want, helping you resolve alerts and fix issues faster.
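
For teams that want to slice and dice alert data outside the UI, the same alerts are exposed through the vRealize Operations Suite API. The sketch below assumes the 6.x-era endpoints (/suite-api/api/auth/token/acquire and /suite-api/api/alerts) and the alertLevel field; verify paths and fields against your version’s /suite-api documentation. The hostname and credentials are placeholders:

```python
import requests

VROPS = "https://vrops.example.com"  # placeholder hostname

def acquire_token(user: str, password: str) -> str:
    # verify=False only to tolerate the self-signed certs common in labs.
    r = requests.post(f"{VROPS}/suite-api/api/auth/token/acquire",
                      json={"username": user, "password": password},
                      headers={"Accept": "application/json"}, verify=False)
    r.raise_for_status()
    return r.json()["token"]

def list_alerts(token: str) -> list:
    r = requests.get(f"{VROPS}/suite-api/api/alerts",
                     headers={"Accept": "application/json",
                              "Authorization": f"vRealizeOpsToken {token}"},
                     verify=False)
    r.raise_for_status()
    return r.json().get("alerts", [])

# Client-side slice-and-dice: keep only the critical alerts.
alerts = list_alerts(acquire_token("admin", "changeme"))
critical = [a for a in alerts if a.get("alertLevel") == "CRITICAL"]
print(f"{len(critical)} critical of {len(alerts)} total alerts")
```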

 

Secure the Software Defined Data Center

So what about securing the Software Defined Data Center (SDDC)?  Well, that’s really important!  You can now install – out of band – PCI DSS and HIPAA compliance content for the vSphere environment.  This helps organizations with regulatory requirements improve their compliance posture.

 

Summary

vRealize Operations 6.6 has made some incredible improvements inspired by many of you out there that continue to challenge VMware and the Cloud Management Business Unit to do better!  Thank you!  Simplification; quicker time to value; persona-based dashboards and troubleshooting flows; enhanced fully automatable workload balancing; improved alerting to resolve issues quicker, and better securing the SDDC, are just scratching the surface of what vRealize Operations and vRealize Suite can help you with.  I hope you enjoy this release!

 

Download and Try vRealize Operations 6.6 here!


July 19th: Getting More Out of vRealize to Monitor Multiple Clouds


Be the multi-cloud hero and manage your entire IT infrastructure proactively. We are continuing our “Getting More Out of VMware vRealize” series with this special webinar on July 19th!

Multi-Clouds

How often does your team run into storage issues, slow applications, network latency or growing multi-clouds? To ensure that your infrastructure platform is available when developers need it, join this special webinar on July 19 and learn how to manage operations across physical, virtual, and multiple clouds environments so you can respond quickly to performance and management issues.

Why Should You Attend?

  • You want to do your job well, which is serving your end-users. Your goal is to ensure the IaaS platform delivers the SLA levels of performance and availability that you promise
  • You need to troubleshoot fast, and see problems before they become serious
  • If you cannot troubleshoot the root cause, at least you need to prove that it is not caused by your IaaS platform
  • Your IaaS platform looks healthy – storage, network, servers and hypervisors are all doing well – yet something is still not working and it is impacting your CEO’s desktop. You want to see it all: how each IT resource performs, and how each relates to the others.

Join this special webinar “Get More Out of VMware: Improve Your Skills From Monitoring Virtual Machines to Multi-Clouds. Be the Hero!”

   Date: July 19th 2017

   Duration: 90 minutes

   Time: 9:30 am PST

   Audience: IT Admins, Operations and Capacity teams

What Will You Learn?

  • Proactive performance monitoring, alert management and predictive analytics
  • Cross-cloud visibility spanning heterogeneous on-premises and public cloud resource consumption
  • Accurate, timely cloud cost and capacity management, modeling, and forecasting
  • Automated workload balancing to proactively optimize application and infrastructure performance
  • Native operations management for hyper-converged infrastructure solutions powered by VMware vSAN
  • Best practices for upgrading your existing vSphere licenses and getting ready to manage multi-clouds

Be the hero for multi-clouds and present these new capabilities to your team. Our subject matter experts will be on hand to answer your questions during the webinar. Look forward to speaking with you soon!


Introducing “Operations” Dashboards in vRealize Operations 6.6


Now that you have a sneak preview of the launch of vRealize Operations 6.6, it is time that we unwrap the goodies and take you through the new features in detail. One of my favorite areas of vRealize Operations Manager is Dashboards. A dashboard, for me, is like an empty canvas that allows you to paint the picture of what is most important to you when it comes to managing day-to-day IT operations. Whether you are a help desk engineer or a CIO, to be successful in your role you need quick access to meaningful information. While there are numerous tools that will provide you data, the art of filtering that data down into information is what matters when it comes to decision making.

Dashboards, being an empty slate, allow you to do so in a quick and efficient manner. This capability allowed us to create multiple out-of-the-box categories matching the various personas of users in an IT organization. The result is a set of out-of-the-box dashboards that give you a jump start into running product operations from Day 1. These dashboards are battle-tested in large IT organizations and are now part of vRealize Operations Manager 6.6.

It was important that all this valuable IP was easily accessible through a centralized console which acts as an anchor for users of vRealize Operations Manager. To achieve this, we introduced a “Getting Started” dashboard which steps you through some useful categories and use cases.

 

Today we will have a look at the first category which is called Operations. Here is how operations shows up on the Getting Started Page:

 

The Operations category is most suitable for roles within an organization that require a summary of important data points to make quick decisions. This could be a member of a NOC team who wants to quickly identify issues and take action, or an executive who wants a quick overview of the environment to keep track of important KPIs.

 

Key questions these dashboards help you answer are:

  • What does the infrastructure inventory look like?
  • What is the alert volume trend in the environment?
  • Are virtual machines being served well?
  • Are there hot-spots in the datacenter I need to worry about?
  • What does the vSAN environment look like, and are there optimization opportunities in migrating VMs to vSAN?

 

Let us look at each of these dashboards; for each, I will provide a summary of what it can do for you along with a quick view of the dashboard.

Datastore Usage Overview

The Datastore Usage Dashboard is suitable for a NOC environment. The dashboard provides a quick glimpse of all the virtual machines in your environment using a heatmap, where each virtual machine is represented by a box. Using this dashboard, a NOC administrator can quickly identify virtual machines which are generating high IOPS, since the boxes representing the virtual machines are sized by the number of IOPS they generate.

Along with the storage demand, the color of the boxes represents the latency experienced by these virtual machines from the underlying storage. A NOC administrator can take the next steps in his investigation to find the root cause of this latency and resolve it to avoid potential performance issues.


 

Host Usage Overview

The Host Usage Dashboard is suitable for a NOC environment. The dashboard provides a quick glimpse of all the ESXi hosts in your environment using a heatmap. Using this dashboard the NOC administrator can easily find resource bottlenecks in your environment created due to high Memory Demand, Memory Consumption or CPU Demand.

Since the hosts in the heatmap are grouped by clusters, you can easily find out if you have clusters with high CPU or memory load. It can also help you identify ESXi hosts within clusters that are not evenly utilized, so an admin can trigger activities such as workload balance or enable DRS to ensure that hotspots are eliminated.


 

Operations Overview

The Operations Overview dashboard provides a high-level view of the objects which make up your virtual environment. It provides an aggregate view of virtual machine growth trends across the different datacenters being monitored by vRealize Operations Manager.

The dashboard also provides a list of all your datacenters along with inventory information about how many clusters, hosts and virtual machines you are running in each of your datacenters. By selecting a particular datacenter you can zoom into the areas of availability and performance. The dashboard provides a trend of known issues in each of your datacenters based on the alerts which have triggered in the past.

Along with the overall health of your environment, the dashboard also allows you to zoom in at the Virtual Machine level and list out the top 15 virtual machines in the selected datacenter which might be contending for resources.




Optimize vSAN Deployments

The Optimize vSAN Deployments dashboard is an easy way to devise a migration strategy to move virtual machines from your existing storage to your newly deployed vSAN storage. The dashboard provides you with the ability to select non-vSAN datastores which might be struggling to serve the virtual machine I/O demand. By selecting the VMs on a given datastore, you can easily identify the historical I/O demand and latency trends of a given virtual machine.

You can then find a suitable vSAN datastore which has the space and the performance characteristics to serve the demand of this VM. With a simple move operation within vRealize Operations Manager, you can move the virtual machine from the existing non vSAN datastore to the vSAN datastore.

Once the VM is moved, you can continue to watch the utilization patterns to see how the VM is being served by vSAN.



vSAN Operations Overview

The vSAN Operations Overview Dashboard provides an aggregated view of health and performance of your vSAN clusters. While you can get a holistic view of your vSAN environment and what components make up that environment, you can also see the growth trend of virtual machines which are being served by vSAN.

The goal of this dashboard is to help you understand the utilization and performance patterns for each of your vSAN clusters by simply selecting one from the provided list. vSAN properties such as Hybrid or All Flash, Dedupe & Compression, or a Stretched vSAN cluster can be easily tracked through this dashboard.

Along with the current state, the dashboard also provides you a historic view of performance, utilization, growth trends and events related to vSAN.


 

Hopefully this post gives you insight into how each of these dashboards can help you run smoother operations and ensure that you have answers to those difficult questions at the back of your hand. Stay tuned for more information on other categories.

 


Manage Capacity and Utilization with vRealize Operations 6.6 Dashboards


I hope you enjoyed my last post on running production operations with the out-of-the-box dashboards in vRealize Operations Manager 6.6. While that post focused on providing visibility into your environments, this post drills down into the specific topic of capacity management in a cloud environment.

While I have worked with most of the roles within an IT organization, I believe one of the most demanding roles is that of the person managing capacity for a virtual environment. This role requires one to be on their feet at all times to find answers to complex problems that revolve around capacity management. I tend to divide these complex problems into five simple categories:

 

1. How much capacity do I have?
2. How is it being utilized?
3. How much capacity is left?
4. When will I run out of capacity?
5. When do I need to trigger the next purchase?

 

While the above questions sound simple, when you apply them to a Software Defined Datacenter they become extremely complex, primarily because you are sharing physical resources, via the hypervisor, between multiple operating systems and applications riding on top of virtual machines. While the focus seems to be capacity, another major dimension one needs to take care of is performance. The above-mentioned role is also responsible for ensuring that all the virtual machines running in this environment are being served well. It is essential that the capacity owner strikes a balance between performance and capacity, which makes this problem harder to solve.

With vRealize Operations 6.6 we try to answer these questions with out-of-the-box dashboards. It was important that all this valuable IP was easily accessible through a centralized console which acts as an anchor for users of vRealize Operations Manager. To achieve this, we introduced a “Getting Started” dashboard which steps you through some useful categories and use cases.

 

Today we will have a look at the second category which is called Capacity and Utilization. Here is how this category shows up on the Getting Started Page:

As mentioned before, the Capacity and Utilization category caters to the teams responsible for tracking the utilization of the provisioned capacity in their virtual infrastructure. The dashboards within this category allow you to make capacity procurement decisions, reduce wastage through reclamation, and track usage trends to avoid performance problems due to capacity shortfalls.

Key questions these dashboards help you answer are:

  • How much capacity do I have, how much is used, and what are the usage trends for a specific vCenter, datacenter or cluster?
  • How much disk, vCPU or memory can I reclaim from large VMs in my environment to reduce wastage and improve performance?
  • Which clusters have the highest resource demands?
  • Which hosts are being heavily utilized and why?
  • Which datastores are running out of disk space and who are the top consumers?
  • What are the storage capacity and utilization of my vSAN environment, and what savings have been achieved by enabling deduplication and compression?

 

Let us look at each of these dashboards; for each, I will provide a summary of what it can do for you along with a quick view of the dashboard:

 

Capacity Overview

The Capacity Overview Dashboard provides you a summary of the total physical capacity available across all your environments being monitored by vRealize Operations Manager. The dashboard provides a summary of CPU, Memory and Storage Capacity provisioned along with the resource reclamation opportunities available in those environments.

Since capacity decisions are mostly tied to logical resource groups, this dashboard allows you to assess capacity and utilization at each resource-group level, such as vCenter, datacenter, custom datacenter or vSphere cluster. You can quickly select an object and view its total capacity and used capacity to understand the current capacity situation. Capacity planning requires visibility into historical trends and future forecasts, so the trend views within the dashboard provide this information to predict how soon you will run out of capacity.
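
Conceptually, a “time remaining” forecast boils down to projecting the usage trend against total capacity. The sketch below is a simple linear-fit illustration on synthetic data; vROps’ own forecasting is more sophisticated, so treat this only as a sketch of the idea:

```python
import numpy as np

capacity_tb = 100.0                      # hypothetical total capacity
days = np.arange(30)
used_tb = 60 + 0.5 * days + np.random.normal(0, 0.4, size=30)  # synthetic history

# Fit a linear trend and project when it crosses total capacity.
slope, intercept = np.polyfit(days, used_tb, 1)
if slope > 0:
    days_until_full = (capacity_tb - intercept) / slope - days[-1]
    print(f"Estimated days until capacity is exhausted: {days_until_full:.0f}")
else:
    print("Usage is flat or shrinking; no exhaustion forecast.")
```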

If you plan to report the current capacity situation to others within your organization, you can simply expand the Cluster Capacity Details view on this dashboard and export this as a report for sharing purposes.


 

Capacity Reclaimable

The Capacity Reclaimable Dashboard provides a quick view of resource reclamation opportunities within your virtual infrastructure. This dashboard is focused on improving the efficiency of your environment by reducing resource wastage. While this wastage is usually caused by idle or powered-off virtual machines, another big contributor is oversized virtual machines.

This dashboard allows you to select an environment and quickly view the amount of capacity which can be reclaimed from the environment in form of reclaimable CPU, Memory and Disk Space.

You can start with the view which lists all the virtual machines running on old snapshots or powered off. These VMs provide you the opportunity to reclaim storage by deleting the old snapshots or removing the unwanted virtual machines. You can take these actions right from this view using the actions framework available within vRealize Operations Manager.

The dashboard provides you recommended best practices around reclaiming CPU and Memory from large virtual machines in your environment. Since large and oversized virtual machines can increase contention between VMs, you can use the phased approach of using aggressive or conservative reclamation techniques to right size your virtual machines.


 

vSAN Capacity Overview

The vSAN Capacity Overview dashboard provides an overview of vSAN storage capacity along with savings achieved by enabling dedupe and compression across all your vSAN clusters.

The dashboard allows you to answer key questions around capacity management such as total provisioned capacity, current and historical utilization trends and future procurement requirements. You can view things like capacity remaining, time remaining and storage reclamation opportunities to take effective capacity management decisions.

The dashboard also focuses on how vSAN is using the disk capacity by showing you a distribution of utilization amongst vSAN disks. You can view these details either as an aggregate or at individual cluster level.


 

Datastore Utilization

The Datastore Utilization dashboard is a quick and easy way to identify storage provisioning and utilization patterns in a virtual infrastructure. It is a best practice to have standard datastore sizes to ensure you can easily manage storage in your virtual environments. The heatmap on this dashboard plots each and every datastore monitored by vRealize Operations Manager and groups them by clusters.

The utilization pattern of these datastores is depicted by color, where grey represents an underutilized datastore, red represents a datastore running out of disk space, and green represents an optimally used datastore.

By selecting a datastore, you can see the past utilization trend and forecasted usage. The view within the dashboard lists all the virtual machines running on the selected datastore and provides you with the opportunity to reclaim storage used by large virtual machine snapshots or powered-off VMs.

You can use the vRealize Operations Manager action framework to quickly reclaim resources by deleting the snapshots or unwanted powered off VMs.


 

Cluster Utilization

The Cluster Utilization dashboard allows you to identify the vSphere clusters that are being heavily consumed from a CPU, memory, disk, and network perspective. High or unexpected resource usage on clusters may result in performance bottlenecks. Using this dashboard you can quickly identify the clusters which are struggling to keep up with the virtual machine demand.

On selecting a cluster with high CPU, memory, disk or network demand, the dashboard provides you with the list of ESXi hosts participating in that cluster. If you notice an imbalance in how the hosts within the selected cluster are being used, you might have an opportunity to restore balance by moving virtual machines within the cluster.

In situations where the cluster demand has been historically chronic, virtual machines should be moved out of these clusters using Workload Balance to avoid potential performance issues. If such patterns are observed on all the clusters in a given environment, it indicates that new capacity might be required to cater to the increase in demand.


 

Heavy Hitter VMs

The Heavy Hitter VMs dashboard helps you identify virtual machines which are consistently consuming high amounts of resources from your virtual infrastructure. In heavily overprovisioned environments, this might create resource bottlenecks resulting in potential performance issues.

With the use of this dashboard you can easily identify the resource utilization trends of each of your vSphere clusters. Along with the utilization trends, you are also provided with a list of the virtual machines within those clusters based on their CPU, memory, disk and network demands. The views also analyze the workload pattern of these VMs over the past week to identify heavy hitters which might be running a sustained heavy workload (measured over a day) or a bursty workload (measured using peak demand).
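
That sustained-versus-bursty distinction can be sketched as a comparison of average and peak demand over the analysis window. The thresholds below are invented for illustration and are not the dashboard’s internal values:

```python
import statistics

def classify_workload(samples, sustained_avg=70.0, bursty_peak=90.0):
    """Label a VM's demand samples (e.g. %-CPU over a day) by workload shape."""
    avg, peak = statistics.mean(samples), max(samples)
    if avg >= sustained_avg:
        return "sustained heavy hitter"
    if peak >= bursty_peak:
        return "bursty heavy hitter"
    return "normal"

print(classify_workload([65, 72, 80, 75, 71]))  # sustained heavy hitter
print(classify_workload([20, 15, 95, 18, 22]))  # bursty heavy hitter
```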

You can export the list of offenders using these views and take appropriate actions to distribute this demand and reduce potential bottlenecks.


 

Host Utilization

The Host Utilization dashboard allows you to identify the hosts that are being heavily consumed from a CPU, memory, disk, and network perspective. High or unexpected resource usage on hosts may result in performance bottlenecks. Using this dashboard you can quickly identify the hosts which are struggling to keep up with the virtual machine demand. The dashboard also provides you with the list of top 10 virtual machines to easily identify the source of this unexpected demand and take appropriate actions.

Since the demand for resources fluctuates over time, the dashboard allows you to look at demand patterns over the last 24 hours to identify hosts which might have a chronic history of high demand. In such cases, virtual machines should be moved off these hosts to avoid potential performance issues. If such patterns are observed on all the hosts of a given cluster, it indicates that new capacity might be required to cater to the increase in demand.


 

VM Utilization

The VM Utilization Dashboard helps the VI administrator capture the utilization trends of any virtual machine in the environment. The primary use case is to list the key properties of a virtual machine and its resource utilization trends for a specific time period, and share these with the VM/application owners.

The VM/application owners often want to look at resource utilization trends at specific time periods when they expect high load on applications. Batch jobs, backup schedules, and load testing are a few examples. The application owners want to ensure that VMs are not consuming 100% of their provisioned resources during these periods, as that could lead to resource contention within applications, causing performance issues.


Hopefully this post gives you insight into how each of these dashboards can help you manage capacity and performance and ensure that you have answers to those difficult questions at the back of your hand. Stay tuned for more information on other categories.

 



Come and Participate in VMworld Design Studio!


Dear vRealize Operations Customers,

 

Would you like to help define the next generation of vRealize Operations? We invite you to participate in the VMware Design Studio at VMworld 2017 US.

 

What Kind of Study Is This?

In the VMware Design Studio, we will show a few key product directions and design prototypes. This is a unique opportunity to get your voice heard and interact in a small group or 1:1 session with the product teams.

 

This year there will be three topics for vROps: 1) vRealize Operations – New and Improved Capacity Management; 2) vRealize Operations – Exploring New Ways for Application Management; 3) vRealize Lifecycle Management. Customers can sign up for one or more sessions based on their time availability or interests.

 

What’s in it for YOU?

Upon completion of the session, we will offer you very cool “VMware Design Partner” swag!

 

Where and when is Design Studio?

The sessions will be held in the Mandalay Bay Convention Center – 3rd Floor – room Palm H, from Monday 8/28 through Thursday 8/31.

 

To schedule a session, contact Ashley Pan (apan@vmware.com), or sign up at this link: www.bit.do/vrops2017

 

Plus if you want to learn about other VMware products, please check out other cool sessions here: https://uxresearchers.github.io/lv17-blog/  

 

Please act quickly as we will be able to work with just a small set of customers – thank you for your time and your interest in improving VMware products!

 

Thank you! We look forward to hearing from you!

 

Ashley Pan (apan@vmware.com)

User Experience Researcher @VMware


IT Teams Need To Finally Implement Self-Service Automation


Seven things, including Self-Service Automation, that IT teams must do to retain relevance with end users.

I was talking with an industry analyst (think Gartner, Forrester, IDC) the other day about a broad range of trends impacting IT. Somehow we got onto a discussion of IT teams losing relevance with their line-of-business customers. I can’t recall the conversation exactly, but it went something like this: “David, I talk with different IT teams every week and many of them ask me the same question: ‘What can we as IT do to be more relevant to our end users [line of business]?’”

Jim (not his real name) told me that the first question he asks these teams in response is "Are you offering your customers self-service?" The answer he hears back most often is "no, we haven't gotten to that yet." Jim then goes on to advise these teams to A) leverage an automated approach to service delivery to speed up resource delivery (if they are not already doing so); and B) be sure to also implement self-service that makes it drop dead easy for end users to get the services they want.

If you think about it, not implementing self-service is denying the reality that line of business partners have choices beyond enterprise IT. It also fails to recognize that increasingly our expectations of how things should work, at work, are shaped by our personal and consumer experiences. Self-service and the near instant gratification that comes from it just makes more sense today than submitting tickets and waiting weeks for resources to be available for your next critical project.

My Top “X” List For IT

This exchange got me thinking about the big-ticket items that most IT teams must tackle to be more relevant to their end users. If the #1 thing that IT teams must do to retain or regain relevance is embrace self-service, what does a top ten list look like? Sorry to disappoint, but I don't have a top ten list. There are, however, some things that I feel stand apart from the rest of the pack when it comes to looking at how IT operates. So, in that spirit, here is my list of the top seven things IT must do to remain relevant.

1. Implement Self Service for Resource Requests
2. Market IT Services to your End Users
3. Enable Infrastructure as Code
4. Become an IT Developer
5. Begin to Think about Multi-Cloud Networking
6. Go Beyond Infrastructure and Deliver Complete Stacks
7. Help App Dev Teams Move Containers to Production

There are undoubtedly other things that IT teams can do that would increase their relevance to line-of-business (LOB) partners. Having said that, I do think this is a pretty good list to start with. There's too much here to cover in a single blog, so I'll elaborate on each of these in this blog and several others that will follow. Hopefully, along the way, I will provide enough insight into each to give you a good idea of what IT must do, along with some additional thoughts on how to get it done.

Starting with Self Service

According to Wikipedia, and depending on how you look at it, Amazon Web Services has been around since 2002 or 2006. Early adopters flocked to it, in my opinion, for two reasons. The first was the ability to get infrastructure fast. The second was the ability to get those resources without having to file one or more tickets with the help desk.

Today, giving end users resources fast is simply a matter of automation. Many organizations have adopted automation to dramatically speed up the provisioning of infrastructure resources. Application-level resources are a different matter, but we'll cover that elsewhere.

I have first-hand experience talking with many IT teams who used to take four or more weeks to provision resources but now routinely do it in under thirty minutes. Of course, with Amazon you can get those resources in just a few minutes, so taking 30 minutes or so is still longer than it would take using AWS. But let's be honest – how many developers find out about a project and then need to be coding it 5 minutes later? Thirty minutes is plenty fast for most needs.

While many organizations have adopted, or are in the process of adopting, automation to speed up service delivery, not nearly as many have implemented self-service as part of that process. Many still rely on the request fulfillment processes that existed before automation was implemented. The most common example is organizations using ServiceNow for requesting resources, which in turn generates a ticket for the platform automation team, which then initiates an automated process to fulfill the request.

Leveraging an existing ticketing process isn’t necessarily a bad approach and there are some good reasons for doing it. The main reason that I am aware of is that this approach means that any existing process for determining who has access to specific resources doesn’t need to be re-codified into the automation that supports self-service.

That's not a bad reason to keep the existing process, but remember that if you are an internal IT team, you're competing with the public cloud, and on the public cloud self-service means self-service: no tickets and no help desk. So, going the extra mile to enable true self-service, where entitlements and other forms of governance are matched between users and resources, might be worth it for your IT team given the world we live and compete in.

Now a few caveats around the idea of self-service. Different end users have different needs. Many end users are perfectly happy selecting resources from a pre-populated catalog. VMware vRealize Automation is a great example of an automation platform that supports this model of self-service.

In this model, blueprints can be created to represent everything from a single machine to a complex, multi-tier application, with storage, networking, security and even monitoring agents all represented in the blueprint. These blueprints then become catalog items that, once selected by an end user, are instantiated in near real time.

Other users might prefer a self-service model that is closer to what they would experience on Amazon. This model is declarative in nature and resources are requested either through a CLI or through an API (using scripts or through another tool) in the form of small building blocks that represent infrastructure elements such as compute, storage, or network. For IT teams looking for such a model to satisfy their end users, VMware Integrated OpenStack (VIO) might be the best choice for a service delivery automation platform.
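To make that declarative, building-block style concrete, here is a minimal sketch using the OpenStack SDK for Python, which is one way to drive a VIO cloud programmatically. The cloud name, image, flavor and network names are placeholders for whatever exists in your environment.

```python
import openstack

# Connect using a named cloud from clouds.yaml (placeholder cloud name)
conn = openstack.connect(cloud="vio-dev")

# Declare the building blocks we want: an image, a flavor, and a network
image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("dev-net")

# Request the server; the call returns quickly and the environment
# is assembled asynchronously behind the API
server = conn.compute.create_server(
    name="dev-box-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The same request could just as easily come from a script inside a CI job, which is exactly why this model appeals to developers.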

A hybrid model might be the best choice for others. Here, vRealize Automation is used to offer VM-level resources from a catalog, but it is also used to reserve resources for a VIO-based developer cloud that an App Dev team would like to implement. In this model, vRealize Automation would also be used to provision the components necessary to instantiate that VIO-based developer cloud for the App Dev team.

Just for completeness, I should point out that vRealize Automation can also support the idea of blueprints as code, where blueprints are created or modified using YAML. These blueprints can then be imported into vRealize Automation and offered to end users through the catalog. These same blueprints can of course be exported as YAML as well.
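Treating blueprints as YAML also means they can be version-controlled and sanity-checked before being imported back into the catalog. The sketch below assumes a hypothetical export format with top-level name, version and components keys purely for illustration; the real blueprint schema may differ, so adjust the expected keys to match your own exports.

```python
import sys
import yaml  # requires PyYAML

# Hypothetical top-level keys for an exported blueprint; adjust these
# to the actual schema of the files you export
EXPECTED_KEYS = {"name", "version", "components"}

def lint_blueprint(path: str) -> None:
    """Fail fast if an exported blueprint file is missing expected keys."""
    with open(path) as fh:
        doc = yaml.safe_load(fh)
    missing = EXPECTED_KEYS - set(doc or {})
    if missing:
        sys.exit(f"{path}: missing keys: {sorted(missing)}")
    print(f"{path}: OK ({doc['name']} v{doc['version']})")

if __name__ == "__main__":
    lint_blueprint(sys.argv[1])  # e.g. python lint.py blueprint.yaml
```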

The Right Self-Service Model for Your End Users

Hopefully you can see that solutions to the self-service problem exist along a continuum. Figuring out what type of self-service model to implement is very much a function of understanding your users. There are different approaches and you won’t be sure which approach makes the most sense unless you are actively engaged in understanding the needs of your users.

Having a deep understanding of what your end users need is also the prerequisite for our next must-do item: effectively marketing what you offer to your end users. More to come on that in the next installment of this series.

Other Blogs In This Series

  1. Marketing Internal IT To Line-Of-Business
  2. IT As Developer Of Infrastructure As Code

Learn More

• Visit VMware vRealize product page
• Visit VMware Integrated OpenStack product page
• Try our Automate IT Hands-on Labs
• Try our Intelligent Operations Hands-on Labs

The post IT Teams Need To Finally Implement Self-Service Automation appeared first on VMware Cloud Management.

Marketing Internal IT to Line-of-Business


Public clouds make marketing internal IT to line-of-business a must-do item

A couple of weeks ago I posted the first in a series of blogs on big things that IT teams need to do to ensure that they remain relevant to their end users.  In that blog, I introduced a list of seven things that IT teams should do to be successful. I also covered the first item, which was providing self-service to end users for infrastructure and application services.  Today I'll pick up from where I left off and talk about marketing internal IT to line-of-business, and the need to add marketing to the list of functions that should exist in every IT organization.

 

For reference, the list I discussed in my first blog is below:

  1. Implement Self Service for Resource Requests
  2. Market IT Services to your End Users
  3. Enable Infrastructure as Code
  4. Become an IT Developer
  5. Begin to Think about Multi-Cloud Networking
  6. Go Beyond Infrastructure and Deliver Complete Stacks
  7. Help App Dev Teams Move Containers to Production

Public Cloud Providers Want Your Company's IT Budget

Contrary to the way IT teams have operated for decades, the idea popularized by the movie "Field of Dreams" – "if you build it they will come" – isn't the way IT works, at least not anymore.  The main reason is that when it comes to the delivery of infrastructure and application services, your end users have more options than they did in the past, given the rise of public cloud service providers such as AWS or Microsoft.

These public cloud providers are not just offering services.  They are spending money on marketing as well.  And that money is directed at getting the attention of your line-of-business users.   Exactly how much money they are spending is hard to say but according to this recent article in the Wall Street Journal, all companies on average spend around 7.5% of revenue on marketing.  That’s the average of a lot of companies so let’s just focus on AWS and Azure.

AWS revenues were $12.2B in 2016, but since Amazon doesn't break out marketing spend for AWS, it's not publicly known what their marketing spend was. However, Amazon's overall marketing spend was 5% of total revenue.  For AWS, this percentage is probably light, given that the average marketing spend for tech companies is around 15% of revenue (same WSJ article).  But even at 5%, that's a healthy chunk of change targeting your company's end users. Azure revenues are about a quarter of AWS's (a best estimate, since Microsoft doesn't disclose these), so there is less to worry about here, but it is still more marketing dollars aimed at getting the attention of your end users.

There are many differences, of course, between what a third-party vendor and what an internal IT team needs to think about when it comes to marketing. First, AWS and Microsoft are targeting thousands of organizations – internal IT is likely targeting one.  Second, internal IT has a bit of a "captive audience" advantage that AWS and Microsoft don't enjoy.

Finally, if you step back and think about it, internal IT shouldn't really be thinking of public cloud providers as competition at all: if IT is doing things right, these vendors are simply part of the services portfolio that internal IT brings to the table for its line-of-business users.  But this presumes that internal IT has mastered the idea of being a broker of services, and I know from first-hand experience that most organizations just are not there yet.

Even given these differences, if internal IT teams want to be successful at positioning their services in a world where public cloud providers exist, they are going to have to invest some percentage of their time and money in the function of marketing.  Given this, it's a good idea to look at what marketing is and what it means to do marketing as an internal IT provider.

Marketing and Internal IT Teams

If you Google the words "What is Marketing", you'll get loads of articles, videos and more.  Most explanations of marketing eventually get around to the four "P"s – Product, Place, Price and Promotion.  For our purposes, we are mostly concerned with Promotion.  Specifically: "What is the best way for internal IT to promote the IT services they offer to their end users?"

 

Since the competition is to a large extent AWS and Azure, let's look at what they do first.  Both have a web presence that makes it easy for their target buyer to understand what they offer; what problems their services solve; the process to purchase their services; and what their services cost.  Their web presence is also chock-full of other useful things like videos, datasheets, and white papers.  So, with this as a starting point, ask yourself how good a job your IT team does with its internal web presence.

I have to admit that my own company’s IT team does a really good job on this as it relates to infrastructure and application services. They identify the services they offer in very digestible language.  Language that their end users can easily understand.  They articulate what each service is and they provide the prices for the services they offer.

In some ways, they actually do a better job than AWS and Azure, because they post on the site not only the costs of the services they offer but also a cost comparison for equivalent services from AWS and Azure.  That kind of data is extremely helpful to end users who are looking to solve a specific problem but also want to be cost-effective in the choices they make.

Another thing that all vendors strive to do is serve up a generous portion of customer case studies. Few things are as powerful from a marketing perspective as an end user raving about how great your services are and how they helped them achieve something fantastic for their own customers.

Internal IT teams need to do the same. Bonus points are awarded if the IT team can figure out how to tie customer case studies directly into their company's mission.  This way, a business leader who may not be super tech savvy can easily understand how the services that the internal IT team provides benefit their business strategies.

Finally, many internal IT organizations have access to some sort of companywide collaboration platform.  Here at VMware, we use a technology based on Socialcast.  Internal collaboration platforms are another great way to get the word out around the services that internal IT offers and the customers that are being successful using them.  Access to this kind of internal communication channel is something that AWS and Azure don’t enjoy so take advantage of it.

Get the Right Combination of Product, Price and Promotion

Of course, for any marketing program to be successful you have to start with products and services that add value to a buyer’s life.  This is especially true in B2B where the technique of appealing to an emotional need, so often used in consumer markets, is generally not that successful.

That is where the "Product" and "Price" Ps come into play.  All IT teams need to have processes in place that help them identify the right services to offer and a competitive price for those services.  Many organizations have Business Relationship Managers who can play a crucial role in ensuring that the right services, at the right prices, get developed, but there are lots of other techniques, such as surveys and focus groups, that can help an internal IT team figure out what services they should offer.

For an internal provider, getting the right mix across product, price and promotion is critical to achieving success.  As an example, some time back I spent a fair amount of time with a very large ISV developing Security Software.  The IT team for this company had launched a private cloud effort but had struggled to gain any real traction with their internal customers.  During the first year of operation, the team never got above 400 provisioned VMs.

The core problem that this team relayed to me was that the first set of services they launched focused on fairly narrow use cases that represented only about 20% of the environments they normally provisioned for their end users.  This first set of services was also associated with some of the most complex services they offered.

It took a bit of time, but the team finally figured out what they really needed to focus on: offering simpler services, and services that were requested over and over – services that represented the 80% of environments they routinely provisioned.

The switch in service offerings was a good start but addressing this issue alone wasn’t enough for their private cloud to take off. It wasn’t until the team began to aggressively market their services that things changed.

Once they added marketing to the mix of what they were doing, things accelerated rapidly and by the end of their second year of operation the number of managed VMs in their private cloud had increased nearly 10x.  Moreover, they were adding nearly as many VMs per month by this time as they had managed that entire first year.

Marketing for this team meant everything I have discussed already, but it also meant doing things like hosting events and lunch-and-learns, and using other vehicles to get their team members out in front of their end users.  The team had also cultivated some strong champions within their lines-of-business who were now telling their story for them.  So not only did they have strong customer references, these same references were also evangelizing on their behalf.

On Deck: Infrastructure as Code

In the next blog we’ll switch back to things that are more easily relatable from a technology standpoint.  We’ll look at how, as the data center becomes more and more “software-defined”, the techniques used to design and implement services look more and more like the things that application development teams do day in and day out.

It’s always important for IT teams to advance the state of the art in terms of how they deliver services and adopting practices around infrastructure as code definitely will help.  But advancing your approach to service delivery will matter little if line-of-business users don’t know what the IT organization delivers in terms of services.  That’s the problem that having a marketing function within IT helps address.

Marketing helps IT teams communicate the value that they can bring to the lives of their end users.  There is a saying you have probably heard that goes something like this: "If a tree falls in the forest and there is no one there to hear it, does it make a sound?"  From a physics perspective, of course the answer is "Yes".  There is a sound whether anyone heard it or not.

But if you look at this question from a more metaphysical standpoint, I believe the answer is "No".  IT can offer the greatest services on earth, but if end users don't know about them, then what is the point?  Marketing is about making sure that your end users "hear the sound of the tree".

Marketing is Necessary but Not Sufficient

A few last words on the need to have the right services.   Even if IT has a great internal marketing machine, your IT organization will struggle if the services offered aren’t rock solid.  When it comes to infrastructure and application resource delivery, IT teams need to invest in technology like a cloud management platform (CMP) to ensure that they can quickly deliver resources to their end users.

vRealize Suite is an enterprise ready CMP that can help IT teams automate and dramatically speed up the delivery of infrastructure and application level services – across private and public cloud, across traditional and cloud native services.  It also provides enterprises with the capabilities that help IT effectively manage day two operations once services have been provisioned.  If you haven’t already embraced a CMP approach to automated delivery and ongoing management of infrastructure and applications, now is the time to do so.

Other Blogs In This Series

  1. IT Teams Need To Finally Implement Self-Service Automation
  2. IT As Developer Of Infrastructure As Code

Learn More

• Visit VMware vRealize product page
• Visit VMware Integrated OpenStack product page
• Try our Automate IT Hands-on Labs
• Try our Intelligent Operations Hands-on Labs

The post Marketing Internal IT to Line-of-Business appeared first on VMware Cloud Management.

IT As Developer Of Infrastructure As Code


IT As Developer:  One Of The Keys To Relevance

This blog is the third installment in a series focused on the question of what IT teams need to do to retain or regain relevance (depending on their circumstance) with line-of-business.  For the full list, check out my first blog on this subject, where I also covered self-service.  The second blog looked at the need for IT to effectively market their services to their internal customers.  This blog is focused on two of the items on the list: what "Infrastructure as Code" (IaC) is, and how it relates to the concept of IT as developer.  It is also about the need for IT to embrace DevOps principles, along with Continuous Integration / Continuous Delivery practices for IT code, in order to help accelerate application development initiatives at their companies.

Infrastructure As Code


 

The Forrester Research papers below provide some great additional background on many of the key concepts covered in this blog.  The first paper is all about IaC.  The second focuses on the need for system admins (a term used broadly here) to transition to IT developers.  You should also check out a blog by my colleague Colby Heiner, which looks at the topic of the Software-Defined Data Center (SDDC) as Code.

DevOps, Continuous Integration, Continuous Delivery

It’s hard to appreciate the idea of IT as Developer without first considering how application development has changed since the advent of the DevOps  movement.  For many years now App Dev teams have been pushing hard on the idea of becoming more agile.  A big part of becoming more agile is the adoption of DevOps principles.

From my standpoint, the goal of DevOps is to empower teams that include developers and operations professionals to jointly create a new operating model that can produce higher-quality code and more frequent releases.  DevOps is first and foremost about people and process.  Technology comes later and is something that supports people and process.

Continuous Integration and Continuous Delivery (CI/CD) is a practice that supports making this idea a reality. CI/CD is about integrating the developer tool chain and automating the development pipeline so that every step of the development process is connected and movement across steps is fully automated.  Through this approach, code moves from development, to test, and then to production in a rapid but controlled fashion.  The adoption of CI/CD practices allows organizations to push code to production rapidly, but with very high confidence that the code will function as expected.

Infrastructure as Code and the SDDC

The core idea behind a software-defined data center (SDDC)  is that all the physical resources that make up the data center can be abstracted through software.  “Infrastructure as Code” (IaC) is another way that people talk about the same idea.  Turning a physical data center into software makes it infinitely easier to quickly compose and then roll out environments based on software defined building blocks of compute, storage, and network.

IaC came into vogue with the rise of AWS.  With IaC, developers could request infrastructure in a declarative fashion, using a building-block approach that allowed them to completely specify what kind of environment they needed for a particular project.  IaC not only supported self-service but was also programmable.  This meant that other technologies could orchestrate the creation of resources using the API of the automation platform responsible for building out the requested environment. Being able to compose infrastructure by calling it through an API meant that IaC could easily be leveraged as part of CI/CD initiatives.
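As a concrete example of that programmability, here is a minimal sketch using boto3, the AWS SDK for Python, to create a stack from a declarative CloudFormation template. It assumes AWS credentials are already configured in the environment; the stack name and the single-bucket template are placeholders.

```python
import json
import boto3  # assumes AWS credentials are configured in the environment

# A tiny declarative template: we describe the desired end state,
# not the steps to get there
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {"ProjectBucket": {"Type": "AWS::S3::Bucket"}},
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="iac-demo", TemplateBody=json.dumps(template))

# Because provisioning is just an API call, any CI/CD tool can orchestrate it
cfn.get_waiter("stack_create_complete").wait(StackName="iac-demo")
print("stack ready")
```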

But the implications of IaC extend well beyond being able to easily request resources programmatically.  One of the Forrester Research papers cited at the beginning (Lead The I&O Software Revolution…) summarized these implications well: “An SDDC treats infrastructure in the same manner that an application developer treats the application – it’s all code.”  So, while IaC supports the ability of organizations to implement CI/CD for application code, it also presents an opportunity for IT to manage the SDDC as if it were an application itself.

DevOps Principles and CI/CD Applied to the SDDC

Since the data center is just code, IT can now apply the principles of CI/CD to the development of code that represents infrastructure.  IT can also apply the same principles to software that does not represent physical hardware but instead represents processes associated with managing that infrastructure (or the application that rides on that infrastructure).  An example of something beyond infrastructure could be a monitoring solution that helps ensure performance and availability of a specific environment or application.

Closely linked to the idea of creating a development pipeline for the code that represents the data center is the notion of "immutable infrastructure".  The core idea behind immutable infrastructure is that infrastructure changes should always be made as part of a development pipeline process.  Just like with application-level code, IT needs to make sure that any changes introduced are fully tested before being put into production.  So rather than changing the configurations of already provisioned environments, adopting DevOps principles means that IT instead a) makes changes to the source code that represents an environment; b) thoroughly tests changes to that source code in an appropriate test environment; and then c) rerolls the environment with the changes implemented.

This approach is consistent with how an application developer would push out application level code changes if they were using DevOps principles supported by CI/CD practices.  The developer would a) create new code that extends or changes the functionality of an existing application; b) then they would fully test the change as part of the entire application (not just test the code change in isolation) and finally c) they would reroll the entire application.   The comprehensiveness of this kind of process along with the need to do it relatively frequently dictates the need for some sort of automated development pipeline to ensure that things happen exactly as expected and that the process can easily be rerun as often as necessary to push new code to production.
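Expressed as code, the pipeline for either case reduces to the same three steps. The helpers below are hypothetical stand-ins for your own build, test and deployment tooling; the point is the shape of the flow, not the implementation.

```python
def build_artifact(source_rev: str) -> str:
    """Hypothetical: bake an image or bundle from the environment's source."""
    return f"image-{source_rev}"

def run_tests(artifact: str) -> bool:
    """Hypothetical: exercise the artifact in a disposable test environment."""
    return True

def reroll(artifact: str) -> None:
    """Hypothetical: replace the running environment; never patch in place."""
    print(f"rolled out {artifact}")

def pipeline(source_rev: str) -> None:
    artifact = build_artifact(source_rev)  # (a) change the source, then build
    if not run_tests(artifact):            # (b) test the whole environment
        raise RuntimeError("tests failed; production remains untouched")
    reroll(artifact)                       # (c) reroll rather than mutate

pipeline("a1b2c3d")
```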

With so much of the data center now software defined there is no shortage of targets where IT could begin to apply DevOps principles and CI/CD practices to IT Code.  Configuration files, free form scripts written in Perl, Python or any number of languages; scripts associated with Configuration Management solutions like Puppet, Chef, Ansible, Salt; workflows associated with any number of orchestration solutions.  All of these are examples of IT Code where CI/CD practices could be extremely useful.

Files that are created or edited as part of an IT solution in the data center are also IT code.  An example in this category would be files associated with an application monitoring solution.  The files involved represent artifacts like custom dashboards, custom reports, or user-created scripts that drive alerts for specific scenarios that are not supported out of the box.  If the files change frequently, or if the results of a misapplied change would generate considerable rework, then they too are good candidates for CI/CD practices.

Getting Started

As mentioned at the start of this blog, app dev teams are well down the path of embracing DevOps principles and CI/CD practices.  It’s the best way to produce high quality code rapidly.  IaC is now a natural part of that process and ensuring that the environments that get wrapped into these CI/CD processes perform well is a key part of what it means for IT to be a more effective partner to the lines of business.  However, with the velocity of application delivery dramatically increasing due to the adoption of DevOps principles it is becoming increasingly difficult for IT teams to keep up without adopting the same approach.   But where to start?

Code Stream Pipeline for vRealize Automation Blueprints


 

If you are a vRealize user, there is a very easy on-ramp.  The vRealize Code Stream Management Pack for IT DevOps is a solution that makes it easy to embrace the use of CI/CD practices for vRealize and other VMware artifacts.  The solution combines vRealize Code Stream with a management pack that delivers pre-created pipelines designed to manage vRealize artifacts.  Covered artifacts include the following:

  • vRealize Automation: Blueprints, software build profiles, property definitions, groups and actions
  • vRealize Orchestrator: Workflows, actions, configuration elements, and packages
  • vRealize Operations: Custom dashboards, reports and alerts
  • vSphere: Template and custom specifications
  • vRealize Code Stream: Pipelines

My colleague Greg Kullberg did a couple of great YouTube videos that show the management pack in action.  The first video shows the management pack being used to manage vRealize Automation artifacts.  The second shows it being used to manage vRealize Operations artifacts.

The management pack, along with vRealize Code Stream, is part of the entitlement associated with vRealize Suite and with vRealize Automation Advanced and Enterprise editions.  The use of Code Stream included as part of that entitlement is limited to the management of IT artifacts associated with the VMware products mentioned above, but it is possible to purchase additional licenses to manage code beyond VMware artifacts.

Other Blogs In This Series

  1. IT Teams Need To Finally Implement Self-Service Automation
  2. Marketing Internal IT to Line-of-Business

Learn More

The post IT As Developer Of Infrastructure As Code appeared first on VMware Cloud Management.

Realizing the Self-Healing Data Center: Artificial Intelligence’s Role in Automation


There’s an old joke that IT is about keeping the lights on. Have you ventured into The Home Depot lately? Lighting isn’t what it used to be! With a diversity of choices—LED, CFL, Halogen, Fluorescent and, for now, the old incandescent bulb Thomas Edison would recognize, the simplicity of yesteryear is gone. The same could be said for the data center, only in our industry, it’s yesterday—or even yesterminute!

With rising application diversity, public and hybrid clouds, modern architectures and “function as a service,” keeping the lights on is the least of it. Today managing workloads for optimal resource consumption and user experience—to keep the whole business on—requires automation. The more complex it all gets, the more critical it becomes to keep it simple through a single point of control and management.

I wrote recently about the self-healing data center and the role of automation in remediating problems, to the point of "healing" downed services before they require escalation.  The foundation of any self-healing data center is the management control plane. vRealize Operations 6.x brought automated remediation and predictive resource scheduling into reality. Today I'd like to look ahead at what "v2" of the self-healing data center from VMware could look like.

The next generation of the self-healing data center will stand on three legs: artificial intelligence; strategic partnerships with public cloud vendors to ensure tight, seamless interactivity; and a state-of-the-art management backbone. VMware's overarching principle remains the same: to give customers flexibility and choice while delivering a single point of management for any cloud.

Automation has long been key to doing more with less

As IT focuses more on innovation, offloading as much routine administration of the data center and cloud as possible seems like an obvious choice. But the bar keeps moving up; most companies today are on their journey to the cloud, so much so that it's changing the role of the data center to that of a hub for cloud operations. As with all challenges, this brings opportunity: the self-healing data center is becoming a reality.

VMware’s relationship with Amazon Web Services, announced last year, allows public and private clouds to co-exist on the same infrastructure. This provides incredible efficiency and resource consolidation while offering flexibility in where to run your workloads. VMware Cloud on AWS provides a single point of visibility and control to manage your clouds. A big step in itself, this partnership also helps VMware more fully realize the self-healing data center as it builds out delivery of tools to automate dynamic workload shifting.

Artificial Intelligence is the next piece

First, what do we mean by the term Artificial Intelligence, or AI? People think of AI as allowing machines to make decisions for people—but at VMware we think of AI as powerful enabling technology that helps people make better decisions (see Ray O’Farrell’s related piece here).

VMware wants to use AI to help our customers make sense of big data: AI will help to surface what you need to know when you need to know it. AI that can “connect the dots” will become an increasingly important aspect of our cloud management platform and the automation that helps our customers drive digital transformation. Our goal, over time, is to make cloud infrastructure itself more intelligent. This means that AI-enabled infrastructure could deliver degrees of self-regulation well beyond today’s policy-based capabilities: power usage, elasticity, redundancy and security are among the critical data center functions that, like capacity planning, can be effectively lifted from the IT admin’s plate. The term adaptive infrastructure is emerging to describe this almost sentient state, and intelligent reporting can drive a human-machine relationship that approaches collaboration.

VMware's AI initiatives are growing up, gaining resources and momentum internally.  Teams are charged with innovation in the AI space and granted the freedom to run. The guiding principle for pursuing AI at VMware is simply to help customers solve complex problems more quickly. vRealize Log Insight was the first fruit, using big data and machine learning to deliver operational intelligence. We've also been working to leverage advanced analytics across software-defined infrastructure to improve resource allocation and hybrid cloud operations.

The next generation of cloud automation will assume AI as part of its DNA

Expect to see more "function-as-a-service" offerings as that model matures, as well as the incorporation of next-gen microservices and distributed-state architectures for container deployments and serverless environments, respectively. Mike Wookey, VMware Cloud Management CTO, did a spotlight session at VMworld on the role of AI in these roadmap features; for more on that, go here.

The big takeaway here is: VMware is delivering the self-healing data center in a way that keeps you in control. Our extensible cloud management platform will continue to adapt to your existing cloud strategy today and tomorrow. Hybrid cloud is a reality and we continue our significant investment in automation because we believe it is the critical bridge between cloud and data center. vRealize Operations can handle it all, and thereby protect your investment in the Software-Defined Data Center, but if third party tools are a reality in your shop, you can keep those constituents happy. One of our customers at VMworld recently put it this way:

“vRealize Automation (Orchestrator) allows you to connect to a huge ecosystem with a huge number of third-party systems to automate any and every IT process that you can think of. It makes it very flexible and adaptable.”

VMware's guiding mantra remains the same: let you do more with less, automate everything, and provide a comprehensive multi-cloud management platform that eliminates the need for point solutions – all on an open platform that reduces your risk and cost.  That lets you focus on innovation for your business, deliver brand-making customer experiences and advance your journey to the cloud.

The post Realizing the Self-Healing Data Center: Artificial Intelligence’s Role in Automation appeared first on VMware Cloud Management.

IT Must Integrate PaaS and CaaS with IaaS


Go Beyond IaaS: Deliver PaaS and CaaS

This blog is my final installment in my "Seven Things That IT Must Do…" series (see the list of prior blogs at the bottom of this blog) and focuses on the idea that IT must integrate IaaS with PaaS and CaaS (Platform as a Service and Containers as a Service) as part of its cloud service delivery strategy.  While most IT organizations are just now becoming proficient at delivering IaaS, what is abundantly clear to anyone in the industry is that developers don't care about infrastructure.  What they care about is writing good code that helps their organizations address their core mission, whatever that mission is. The mission of IT as it relates to developers is to help them accomplish their mission.

Delivering "frictionless IaaS" – IaaS that provides a public-cloud-like experience, whether on premises or in the public cloud – is a good start, but more and more, IT teams need to look beyond this in terms of how they support App Dev initiatives.  IT teams supporting developers, especially developers who are embracing DevOps principles, need to start thinking about how they can provide some version of PaaS.  I say some version of PaaS because PaaS has become one of those things that is all things to all people.

For some App Dev teams, PaaS means having access to a platform that delivers traditional middleware components on demand.  For other groups, it is a platform that delivers containers on demand (commonly called Containers as a Service).  For others, it is about having access to a full-fledged developer platform that includes developer tools that help them develop either traditional apps or cloud native apps.  The truth is that there is a glut of diversity that IT teams need to contend with as they begin to move beyond IaaS to provide higher levels of value to the App Dev teams they support.

Automating Delivery of 2nd Gen Apps

When it comes to automating the delivery of the middleware necessary to support application development, many development teams are using configuration management tools such as Puppet, Ansible, Chef and Salt.  These configuration management tools make it easy to codify the process of building the middleware stacks that developers routinely integrate into their software projects.   Another benefit of using configuration management tools is being able to use these tools to audit already provisioned configurations and to remediate configurations that deviate from approved standards.

Because of these benefits, many organizations now have large investments in the codified representations of the middleware stacks they use most often.   To support App Dev teams looking to simplify the building of combined infrastructure and application stacks, IT organizations should consider integrating configuration management tools with whatever capabilities they have deployed for automating the delivery of IaaS.

Integrating an existing IaaS solution with one or more existing configuration management tools is the easiest and most natural way to deliver more complete software stacks to end users.  The alternative – developing more complete software stack representations from scratch – generally requires rewriting lots of code associated with provisioning the middleware most commonly used by the enterprise.  App Dev teams are loath to cooperate in such efforts if they have already created tons of these representations using configuration management tools.  So if you want to extend IaaS to include middleware, figuring out how to leverage the representations already sitting in configuration files from solutions like Puppet, Ansible, Chef and Salt is going to be your best path forward.
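As an illustration of what such an integration can look like at its simplest, a provisioning workflow can hand a freshly built machine to an existing playbook rather than re-encoding the stack. The sketch below assumes Ansible is installed and that a webstack.yml playbook (a placeholder name) already codifies the middleware.

```python
import subprocess

def apply_middleware(host: str, playbook: str = "webstack.yml") -> None:
    """Run an existing Ansible playbook against a just-provisioned host.

    The trailing comma in the -i argument tells ansible-playbook to treat
    the value as an ad hoc inventory rather than a file path.
    """
    subprocess.run(["ansible-playbook", "-i", f"{host},", playbook], check=True)

# Called by the IaaS workflow once the guest OS is reachable (placeholder IP)
apply_middleware("10.0.0.42")
```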

Automating Delivery of 3rd Gen Apps

Containers are becoming the predominant choice for organizations creating net new applications. While containers can run on bare metal, most of the marketplace is leveraging containers on virtualized compute environments such as vSphere.  Containers allow organizations to build applications in a way that makes them highly portable across cloud environments.   Containers are also a great fit for applications that must be updated frequently.

While not new, containers have gained tremendous popularity in the last few years, thanks in part to the ease of creating them using Docker. The popularity of containers has in turn given rise to many container-related technologies – many of them open source, many leveraging Kubernetes – that can be easily downloaded by developers to their laptops.

While there are dozens of tools targeting the use of containers for development, tools that support scaling up container-based operations and moving containers to production are not as easy to find.  Most of the container solutions on the market today exhibit some level of deficiency when it comes to addressing production-level requirements for things such as performance, availability, security, or data persistence.  Most solutions also fail to address, in an enterprise-grade way, requirements related to role-based access, resource entitlements, approvals, or other governance items.

Another deficit many of these solutions exhibit is in integrating container-based application components with traditional, 2nd-generation applications.  It turns out most enterprises think this capability is pretty important.  My team recently did a market research project around containers, and one of the core findings was that a majority of respondents believe their organizations will need to run applications that mix both traditional and container-based components in the future.  Helping App Dev teams address these deficiencies so they can successfully move container-based applications to production is something that IT teams will need to master to maintain their relevance – especially with groups focused on cloud-native, container-based applications.

Integrating 2nd and 3rd Gen Apps with CMPs Doing IaaS

Many IT organizations are using some form of Cloud Management Platform (CMP) to deliver IaaS.  In some cases the CMP is only used for services delivered on premises.  In other cases, the organization is using its CMP to provision across both an on-premises environment and a public cloud.  However your CMP is being used, from an IT relevance perspective it needs to be able to move up the capability ladder to handle the inclusion of both 2nd-gen and 3rd-gen application-level services.

Given the diversity of solutions available for provisioning both 2nd and 3rd gen applications, a CMP needs to have an ability to “snap in” existing solutions.  CMPs must provide a framework that makes it easy to extend beyond IaaS so that IT teams can snap in configuration management tools or container as a service solutions.  Your CMP should make it easy to take advantage of the investment the App Dev teams have already made in solutions to provision both 2nd and 3rd gen applications.

Summing Up

The approach outlined above gives App Dev teams the ability to use the solutions of their choice for provisioning application-level components.  This will make them happy and help move the perception of IT out of the "always NO" camp and into the "they can do anything" camp.  But it will also allow IT to easily move applications from developer laptop into production in a way that meets enterprise-grade requirements.  That's a win/win for everyone, and one of those "must do" things that helps IT maintain its relevance in a fast-changing world.

The Blog Series Recap

  1. IT Teams Need To Finally Implement Self-Service Automation
  2. Marketing Internal IT to Line-of-Business
  3. IT As Developer Of Infrastructure As Code
  4. IT Must Adopt DevOps with SDN To Drive Relevance

Learn More

The post IT Must Integrate PaaS and CaaS with IaaS appeared first on VMware Cloud Management.

Announcing: Wavefront and vRealize Operations Integration Coming Soon!


Wavefront and vRealize Operations: Empowering IT to partner with Application teams through shared visibility

Today VMware announced the integration of Wavefront and vRealize Operations, empowering IT to partner with lines of business and application owners by providing rapid onboarding of Wavefront, application discovery, agent lifecycle management and control, and shared visibility across infrastructure and applications.

 

Sign up here to be notified when the integration GAs.

The post Announcing: Wavefront and vRealize Operations Integration Coming Soon! appeared first on VMware Cloud Management.


What’s new in vRealize Operations 6.7 Capacity and Costing! What! Did you also say Costing???


There are some great new capabilities and enhancements in vRealize Operations 6.7 (vROps).  In this blog we would like to give you an overview of two really big changes to vRealize Operations 6.7.

 

New Capacity Analytics Engine

Private Cloud Costing

Simplify, simplify, simplify!  This has been one of the big drivers for the new capacity analytics engine in vRealize Operations 6.7 in addition to many others like Quick Time to Value.

There are many new features, including:

 

  • Real-time predictive capacity analytics.  Capacity updates are available immediately after changes occur.  For example, if you are in the process of reclaiming capacity, you can go back right after you are done, reassess capacity, and see the updates reflected!
  • Forward-looking forecasts that include both an upper and a lower confidence band.  vRealize Operations leverages historical data and statistical modelling of demand behavior to forecast capacity with high confidence (a simplified illustration of the idea follows this list).
  • Simplified settings, by defining and applying your organization's "business intent".  Organizations want things to be simple: for example, "here are my business requirements at ACME Corp. Now vROps, tell me what I need to know and what I need to do".
  • Improved capacity accuracy for Time Remaining, Capacity Remaining, and Right-Sizing used for Capacity Planning use cases, including Workload Optimization.  There are many workload optimization use cases like Initial Placement, Balancing, Densification and Consolidation and you need accurate and real-time capacity analytics to feed these use cases.
  • Plan capacity for future projects and changes.  Run a What-If capacity scenario to add workloads and see where they would fit, but also see how much they would cost you in both the private and public cloud.
  • Integrated costing with capacity.  Chances are you've heard me say this before: "If you can't measure it, how can you manage it?"  The trend is that more and more organizations NEED to see cost data alongside capacity and infrastructure data.  Assessing the cost of running the environment, and discovering cost savings opportunities, is now native to vRealize Operations.
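To make the idea of a forecast with upper and lower confidence bands concrete, here is a deliberately simplified illustration: a linear trend fitted to synthetic demand history, with bands derived from the residual spread. This is not the vRealize Operations engine, which uses its own statistical models of demand behavior; it just shows the shape of the calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy history: 30 days of CPU demand as a percentage of cluster capacity
days = np.arange(30)
demand = 40 + 0.8 * days + rng.normal(0, 3, size=30)

# Fit a linear trend and project 14 days forward
slope, intercept = np.polyfit(days, demand, 1)
future = np.arange(30, 44)
forecast = slope * future + intercept

# The residual spread gives a crude upper/lower confidence band
sigma = np.std(demand - (slope * days + intercept))
upper, lower = forecast + 1.96 * sigma, forecast - 1.96 * sigma

# "Time remaining" in this toy model: the first day the upper band hits 100%
at_risk = future[upper >= 100]
print("days until capacity risk:",
      at_risk[0] - days[-1] if at_risk.size else "beyond forecast horizon")
```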

 

Let's walk through a few vRealize Operations capacity workflows to show you some of these capabilities: Assess Capacity, Reclaim Capacity, Plan Capacity, and Assess Costs.

 

The image below is from the new Quick Start page of vRealize Operations.  This area focuses on Capacity and Private Cloud Costing.

 

Assess Capacity

 

Assess the capacity of each of your datacenters.  View them sorted by criticality, and leverage the recommendations to reclaim capacity, perform Workload Optimization actions, and/or purchase hardware.  View the datacenter's cluster utilization and notice that the selected cluster is out of CPU resources (0 days remaining).

 

Reclaim Capacity

Before thinking about purchasing new hardware, it’s best to look at capacity reclamation opportunities.  vRealize Operations can help you do just that.  View each of your datacenters and the associated capacity reclamation opportunities as well as the cost savings opportunities that go along with that.  Right from this page, view the VM reclamation categories such as Powered Off, Idle, Snapshots, and Oversized VMs, and take action right here, right now!  It’s also possible for you to exclude VMs from consideration here if you wish.

 

Plan Capacity

The business never stops!  More and more workloads are coming online all the time.  Let's plan for these added workloads.  Provide the workload's desired configuration and the target datacenter, then run the scenario.  Oh, and if you want to use an existing VM configuration in the scenario, you can do that too.  The scenario will run and report on the datacenter and where the workloads will fit.  It will also provide the associated cost.  You can change the target datacenters and clusters in context and re-run the scenario.  You will also get a comparison of how much it will cost in VMware Cloud on AWS and in native AWS.  This allows you to make data-driven decisions for new workloads regardless of whether you place them in the private cloud or the public cloud.

 

Assess Costs

Costing is widely available throughout the areas we've discussed so far.  Although there are Cost Drivers and a Cost Reference Database out of the box, you have the ability to modify and update these to more accurately reflect the costs in your environment.  For example, we may have a Dell host server priced at x dollars, but with a discount on the hardware.  This cost adjustment is easy to make.  The point is, YOU have the POWER to update and influence the cost numbers to be even more accurate for you!

Below, we can see the inventory of the private cloud environment, and we can see the Total Cost of Ownership.  We can see the Cluster and Datastore costs in each of the Datacenters.  We can also see the most and least expensive clusters.

 

 

Summary

vRealize Operations 6.7 has made some incredible improvements inspired by many of you who continue to challenge VMware and the Cloud Management Business Unit to do better!  Thank you!  The new capacity engine and the integrated costing are just scratching the surface of what vRealize Operations and vRealize Suite can help you with.  I hope you enjoy this release!

 

Download and Try vRealize Operations here!

 

For more technical resources please visit the VMware vRealize Suite Technical Guides site and leverage the following tiles:

 

 

This blog is published in collaboration with Chima Njaka, Group Product Line Manager at VMware Inc.

The post What’s new in vRealize Operations 6.7 Capacity and Costing! What! Did you also say Costing??? appeared first on VMware Cloud Management.

Workload Optimization – The Key to your Self-Driving Datacenter!


vRealize Operations 6.7 is here and it comes loaded with great new features that will make your life better!  Included in these is an improved capacity engine to help you better manage your environment, integrated cost analysis so you know how much your environment actually costs and where you can save $, and an integration with Wavefront to help you quickly and easily onboard new applications for monitoring.  While all of these are great, in my humble opinion, the BIGGEST addition is the new Workload Optimization functionality, flows and views.

As the title says, Workload Optimization is THE main component of your self-driving datacenter.  The idea behind a self-driving datacenter is that you define your business and operational intent, then "take your hands off the wheel" and let vRealize Operations drive.  It will monitor the environment and, when the datacenter deviates from its optimal state, quickly optimize it back to the desired state, all while honoring your intent.  Simple and hassle-free!

Workload Optimization works closely with DRS to ensure applications have the resources they need.  Under its watch, VMs will be moved to other clusters to meet the performance, operational and business intent you have defined.  Together with Predictive DRS, it provides continuous performance optimization for your datacenters.

Assessing Your Optimization Status

How well your datacenters are optimized can be quickly viewed from the Quick Start page.  The Quick Start page is the first thing you see when you log into vRealize Operations.  It provides a fast view into your optimization, capacity, problems and compliance.  For Workload Optimization, the first column is what we are interested in; it shows that six (6) datacenters require optimization.  You can view the details by clicking on the Workload Optimization link.

This jumps you to the Workload Optimization screen, which shows a list of your datacenters across the top along with their optimization status, current capacity remaining and potential cost savings opportunities.  This "datacenter header" shows datacenters in need of optimization for performance, operational or business reasons as "Not Optimized" and moves them to the front of the list.

Selecting one allows you to focus your efforts on that datacenter and updates the rest of the UI accordingly.  Most importantly, it provides an Optimization Recommendation telling you what you need to do to bring this datacenter back to an optimized state.

Define business or operational intent

Before we go too much further and actually optimize this datacenter, we need to talk about what we mean by operational and business intent, and how you can set it using a few simple knobs.  Intents can include one or more of the following:

– Assure the best application performance

– Save money through license enforcement

– Meet compliance goals

– Drive infrastructure costs as low as possible

– Implement SLA tiering

– And more!

The first thing you need to determine is your target utilization objective for the datacenter.  If application performance is your top concern, spread workloads evenly over the available resources by choosing Balance.  If instead you are looking to place workloads into as few clusters as possible, lower your cost per VM and possibly repurpose some hosts, choose Consolidate.

Next you need to configure the percentage of headroom.  Headroom allows you to choose how much risk is acceptable in a cluster.  It provides a percent buffer of CPU, memory and disk space and reduces the risk from bursts or unexpected demand spikes.  In a production environment it is not uncommon to have a 20% Headroom buffer.

Finally, you need to determine how your business needs should drive the actual VM placement.  vRealize Operations does this by leveraging and honoring vCenter tags.  With vCenter tags you can define which VMs should be placed on which clusters.  Simply tag the cluster(s) and the VM with the same tag and voila… Workload Optimization will make sure that workload is placed on those cluster(s).  One of the most common use cases for this feature is license enforcement, as it will make sure all your Microsoft, Oracle or Linux VMs remain on specific clusters for licensing purposes, which, as we all know, will lower your license costs!

Remember, by using vRealize policies you can set your intent differently for every datacenter if you so desire.  That means you can push for Balance and License Enforcement in your production clusters and Consolidation and cost savings in your test environment.
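To see how these knobs interact, here is a toy illustration of placement filtering, where a cluster is a candidate only if it carries the VM's required tags and the move would not eat into a 20% headroom buffer. This is illustrative only and is not the actual Workload Optimization algorithm; the cluster inventory and tags are made up.

```python
HEADROOM = 0.20  # keep 20% of each cluster's CPU capacity as buffer

# Made-up cluster inventory: current CPU utilization and license tags
clusters = [
    {"name": "prod-01", "cpu_used": 0.75, "tags": {"oracle"}},
    {"name": "prod-02", "cpu_used": 0.55, "tags": {"oracle"}},
    {"name": "prod-03", "cpu_used": 0.50, "tags": set()},
]

def candidates(vm_tags: set, vm_demand: float) -> list:
    """Clusters that carry all required tags and preserve the headroom buffer."""
    usable = 1.0 - HEADROOM
    return [
        c["name"]
        for c in clusters
        if vm_tags <= c["tags"] and c["cpu_used"] + vm_demand <= usable
    ]

# An Oracle-tagged VM needing 10% of a cluster can land only on prod-02:
# prod-01 would breach the headroom buffer and prod-03 lacks the license tag
print(candidates({"oracle"}, 0.10))
```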

Automate Workload Optimization and Turn on Self Driving

Once you have defined your intent, you really don't need to go into the settings again; set it and leave it alone.  With the settings done, it's time to set up our self-driving operations with Workload Optimization.  Workload Optimization recommendations can be run in one of three ways.

1 – It can be run manually with the optimize now button directly from the Workload Optimization screen. This is a fine way to bring the datacenter back to an optimized state every once in a while, as needed, but it’s not really automated or self-driving.  It still requires you to log in and look at the UI each day (or hour!) and see if an optimization is needed.  That’s very 1980’s!  Yawn!

2 – Workload Optimization can be scheduled to run on an ongoing basis during your maintenance windows (e.g. every Sunday morning at 2am). This "self-driving" approach will keep your datacenters optimized and minimize any disruption to your application owners.  I recommend this to every customer I speak to.

3 – But there is another self-driving option that should be engaged for complete coverage, and that is automating a Workload Optimization from the alert. While the ongoing scheduled optimization noted above should keep things nice and optimized, there will be times when your datacenter becomes very un-optimized in the middle of the week.  You know, when Bill from the development team decides to start up all of his test web servers at the same time and run stress tests without telling you!  In those situations you won't want to wait until Sunday morning to optimize; you need to do it immediately.  vRealize Operations will catch this situation and create an alert to let you know an optimization is needed, and if that alert is automated, the Workload Optimization will run and bring the datacenter back to an optimized state.

Visit the New Website for Technical Details

I have created a video that illustrates the Workload Optimization solution and provides a nice overview.

You can also view several other videos for vRealize Operations 6.7 on our new website VMware vRealize Suite Technical Guides which is focused on technical resources you can use to get the most out of your vRealize Operations experience.

Summary

vRealize Operations 6.7 has made some incredible improvements inspired by many of you who continue to challenge VMware and the Cloud Management Business Unit to do better!  Thank you!  The new Workload Optimization feature is just scratching the surface of what vRealize Operations and vRealize Suite can help you accomplish.  Now sit back and let self-driving do its thing!

Download and Try vRealize Operations here!

The post Workload Optimization – The Key to your Self-Driving Datacenter! appeared first on VMware Cloud Management.

Assess Your Upgrade Readiness – vRealize Operations 6.7


We have seen many new capabilities and enhancements introduced in vRealize Operations over the last several releases. vRealize Operations 6.7 is no exception to this trend. Though this is only a dot release, it is packed full of new and exciting capabilities to make managing your private and public cloud easier, so you can achieve self-driving operations and run production operations hands-off and hassle-free.

 

In this blog, we will be covering the steps to assess your environment before you upgrade and the upgrade process itself. If you would like to learn more about the new features included with this next release, please see the What’s New in vRealize Operations 6.7 blog.

 

The evolution of vRealize Operations (vROps) over its many years has seen a number of additional metrics incorporated into the product. With the release of vROps 6.7, the product footprint is reduced by 30% and content is simplified so product analytics can run faster. Many of the newer metrics overlap with or replace legacy metrics, eliminating unnecessary data points and providing precise performance and capacity analytics within vROps.

 

Are You Ready to Upgrade to vROps 6.7?

The product team created a vRealize Operations Upgrade Assessment Tool to help you assess your upgrade readiness and determine how your custom content in previous versions might need to be updated so you can achieve a seamless upgrade.

 

Getting Started:

Below is a video that walks through running the vRealize Operations Upgrade Assessment Tool:

 

 

Understanding Results:

Let’s take a closer look at the report that is generated from the assessment tool. Once you open the generated report you will see the dashboard with a summary of affected objects for dashboards, reports, alerts, supermetrics, and others.

 

 

By selecting one of the object types from the Summary Report you will get a detailed list of the affected components, including component name, owner, widget, views, etc. You will notice that not all components might be relevant to you, so focus on those that you actively use.

 

 

By clicking on a metric from the Impacted Component Details page, you will see the exact metric in question, with a hyperlink (when available) to a recommendation for a replacement metric to use instead. For example, in the image above there is an affected dashboard named "NOC: VM Configuration". This dashboard has a deprecated metric called "guestfilesystem|freespace_total". If you click on the hyperlinked metric name you will be brought to the exact metric in question with recommended actions to take.

 

 

You will notice that, in this example, there is a suggested replacement metric (Guest File System | Utilization (%)) for the deprecated metric (Guest File System stats|Total Guest File System Free(gb)). You can now go into the dashboard in question in vRealize Operations, replace the deprecated metric with the suggested replacement, and ensure that the dashboard works before you upgrade to vRealize Operations 6.7.
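If you have a lot of custom content to review, a small script can pre-screen exported dashboards for deprecated keys. This is a hypothetical helper, not part of the assessment tool; it assumes dashboards were exported as JSON files and that deprecated metric keys, like the one above, appear verbatim in the export.

```python
# Hypothetical pre-screening helper (not part of the assessment tool).
# Assumes dashboards are exported as JSON and deprecated metric keys
# appear verbatim in the exported text.
from pathlib import Path

# Seed with keys from the deprecated-metrics list linked under
# Additional Resources below; this entry comes from the example above.
DEPRECATED_KEYS = {
    "guestfilesystem|freespace_total",
}

def scan_dashboard(path: Path) -> list:
    """Return the deprecated metric keys found in one dashboard export."""
    text = path.read_text(encoding="utf-8")
    return [key for key in DEPRECATED_KEYS if key in text]

for export in Path("dashboard_exports").glob("*.json"):
    hits = scan_dashboard(export)
    if hits:
        print(f"{export.name}: uses deprecated metrics {hits}")
```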

 

Once you have completed the upgrade assessment and updated the affected components, if any were found, you can confidently upgrade to vRealize Operations 6.7.

 

How to Upgrade to vROps 6.7?

The upgrade process is identical to previous upgrades you might have performed. This upgrade has two parts. The first part will update the operating system while the second updates the actual vRealize Operations platform. Below is a short video that walks through the upgrade process:

 

 

As you can see from the video above, the upgrade process is simple, straightforward, and identical to previous upgrades with one exception: the need to set the currency for your installation. Currency? Yes, vRealize Operations 6.7 includes cost analytics to optimize for performance and capacity. Once you log in for the first time after the platform upgrade you will see the following banner at the top of the window:

 

From the Action dropdown menu, select Global Settings, then select Currency to set the currency for the platform. Once the Set Currency window appears you can find the currency appropriate for your installation. Accept the warning that once the currency is set for the platform it can no longer be changed. Finally, select Set Currency to complete the upgrade process. Within about two hours you will start to see costing and performance data populated within the different areas of the vRealize Operations 6.7 platform.

 

Upgrade to vRealize Operations 6.7 to enjoy the major enhancements and new capabilities.

 

Visit the New Website for Technical Details

I have created a video that illustrates the Workload Optimization solution and provides a nice overview.  You can also view several other videos for vRealize Operations 6.7 on our new website, VMware vRealize Suite Technical Guides, which is focused on technical resources you can use to get the most out of your vRealize Operations experience.

 

Additional Resources:

Upgrade Portal:

https://www.vmware.com/products/vrealize-operations/upgrade-center.html

Full List of Deprecated Metrics:

http://partnerweb.vmware.com/programs/vrops/DeprecatedContent.html

vRealize Operations Micro-site:

http://www.vmware.com/go/vrealize-guides

The post Assess Your Upgrade Readiness – vRealize Operations 6.7 appeared first on VMware Cloud Management.

Under the Hood: What’s new in Self-Driving Operations with vRealize Operations Manager 6.7


Today’s the day many of us have been waiting for – the GA release of vRealize Operations Manager 6.7, and it is impressive in both features and scope.  You have probably read some of the blogs about this release already, but in this post we will cover as many of the goodies and updates in 6.7 as we can.  So, grab a fresh cup of whatever and buckle up for self-driving operations!

New Intelligent Capacity Analytics

 

While the legendary capacity engine in vRealize Operations has served customers well for many years, it was finally time for an update.  This was a huge ask of our engineering staff and they really came through.  Let’s start by understanding what is under the hood and how it brings self-driving operations to your SDDC.

The new capacity analytics engine is smarter and faster, delivering updated forecasts every 5 minutes and projections 1 year into the future. This means that you get value from vRealize Operations on day one.  In fact, customers upgrading from previous versions of 6.x will be able to harvest 3 months of history to make the analysis and forecasting even more accurate.

Now you will see historical and predicted demand for CPU, memory, and storage on easy-to-read graphs.

This shows not only the prediction but also a confidence boundary that gives both an aggressive and a conservative estimate of growth in a single view.  The new capacity analytics provides the brains behind self-driving for optimizing performance and capacity, all while giving you the control you need.
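To picture what a forecast with a confidence boundary looks like, here is a deliberately simple sketch. It is not the vRealize Operations analytics engine; it just fits a linear trend to synthetic demand data and derives aggressive and conservative bands from the residual spread.

```python
# Illustration only: a linear trend with aggressive/conservative bands.
# This is NOT the vROps analytics engine, just a picture of the idea.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(90)                               # 90 days of history
demand = 40 + 0.3 * days + rng.normal(0, 3, 90)    # synthetic CPU demand (GHz)

slope, intercept = np.polyfit(days, demand, 1)     # fit the trend
residual_std = np.std(demand - (slope * days + intercept))

future = np.arange(90, 455)                        # project ~1 year ahead
forecast = slope * future + intercept
aggressive = forecast + 2 * residual_std           # upper confidence bound
conservative = forecast - 2 * residual_std         # lower confidence bound

print(f"projected demand in 1 year: {forecast[-1]:.1f} GHz "
      f"(band {conservative[-1]:.1f} to {aggressive[-1]:.1f} GHz)")
```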

Continuous Performance Optimization

 

You need to assure application performance, but you must also abide by operational and business intent.  With predictive analytics, vRealize Operations 6.7 can automatically take actions to balance workload and proactively avoid contention.  How does this work?

A cornerstone component of this hands-free operation is Workload Optimization, which moves workloads between clusters to ensure they are always getting the resources they need.  Workload Optimization considers the business and operational intent of your datacenters before making any moves.

Are you looking to drive better performance of your business-critical applications by balancing the clusters?  Or do you need to keep the cost per VM as low as possible by consolidating workloads onto as few clusters as possible, the consolidation and densification use case?  Do you have SLAs you need to meet?  Or license compliance concerns?  All of these can be met with vRealize Operations.

What’s Your Intent?

 

It’s not good enough to just move workloads around to balance them.  In fact, that can be harmful if lower priority workloads are moved into clusters or hosts that are serving critical VMs.  This is why vRealize Operations 6.7 lets you define your business and operational intent in policy.

  • Do you prefer to balance your workloads or consolidate (densification) them onto fewer hosts?
  • How much risk is acceptable? What headroom would you like for unplanned or burst demands?
  • How can you meet specific business requirements for SLA tiers, license policies, compliance and availability?

All of this is controlled by you, including Tag-Based VM Placement, which allows you to set criteria for VM placement based on vSphere tags.

Operate Like a Cloud Provider

 

Cloud computing offers consumers seemingly unlimited resources – but we all know that AWS doesn’t have unlimited CPU, Memory and storage.  And of course, neither do you!  But the trick is having accurate and meaningful information about your capacity.  You need to quickly understand where you are having capacity shortfalls, how you can address those shortfalls, what your capacity costs are and quickly plan for new consumer demand… just like a cloud provider.

With vRealize Operations 6.7 that, and more, is available at the click of your mouse.  The capacity analytics gives you real-time data on how your capacity is being used and the ability to predict into the future with confidence.

Every five minutes, the capacity analytics updates its forecast models based on resource demand.  By the way, this includes in-guest memory statistics for accuracy in meeting application needs.

Need more capacity?  You can easily find available resources, reclaim them, consolidate and reduce cost.  In fact, you can also plan for the future and decide for yourself if workloads are better off in the Public Cloud or in your datacenter (Private Cloud).

Quick Start

 

A new home page has been crafted to simplify your experience and guide you to the results you need.  The Quick Start page provides all the information you need to begin using vRealize Operations 6.7, broken down into simple use cases.

Optimize Performance:  As mentioned above, this is all about Continuous Optimization leveraging the Workload Optimization use cases (balancing, consolidation and densification, SLA, and license compliance).  This also provides a recommendations page that I often refer to as “a to-do list”: it is where I go to get recommendations from vRealize Operations for many of the object types.

 

Optimize Capacity:  Everything capacity, and of course the newly integrated Private Cloud Costing; the two go hand in hand.  View overall capacity and spot cost savings opportunities that come from capacity reclamation.  Plan for future growth leveraging ”What-If” scenarios to add workloads, and perform public cloud comparisons.  You can also assess the overall costs of the private cloud, the cost of clusters and datastores, and so much more.

 

Troubleshoot:  The heart and soul of troubleshooting both structured and unstructured data. Address and resolve issues by leveraging actionable alerts and integrated Log Analytics, as well as workflow-driven dashboards for object types such as VMs, Hosts, Clusters, Datastores, and the natively integrated vSAN.

 

Manage Configuration and Compliance:  Here you can make sure your vSphere environment is secure and meets the requirements of the vSphere Security Configuration Guides.   Get a list of all the non-compliant symptoms and address those issues.  And for organizations that need to meet PCI-DSS and HIPAA regulatory compliance requirements, vRealize Operations can also help with that!  Spot inconsistencies and unapproved changes in the vSphere environment leveraging purpose-built dashboards for vSphere objects like VMs, Hosts, Clusters, and Virtual Distributed Switches.  Take control of your vSphere security posture!

 

Other user interface enhancements include bringing back the dark theme, updates to several widgets, in-context metric definitions, setting default dashboards and so much more.

You can also view several other videos for vRealize Operations 6.7 on our new website VMware vRealize Suite Technical Guides which is focused on technical resources you can use to get the most out of your vRealize Operations experience.

Summary

 

vRealize Operations 6.7 has made some incredible improvements inspired by many of you who continue to challenge VMware and the Cloud Management Business Unit to do better!  Thank you!  The new features and capabilities discussed here are just scratching the surface of what vRealize Operations and vRealize Suite can help you accomplish.

Download and Try vRealize Operations here!

 

The post Under the Hood: What’s new in Self-Driving Operations with vRealize Operations Manager 6.7 appeared first on VMware Cloud Management.

vRealize Operations within vCenter Plugin


With the release of the latest version of vCenter, 6.7, there is a new capability to help achieve self-driving operations from within your vCenter environment, with vRealize Operations at its core. This is the new vRealize Operations within vCenter plugin, which provides six new dashboards directly in the vCenter UI. This plugin was added to help with two things:

 

First, there are three dashboards that give a holistic view of your vSAN environment for the specific vCenter. You can see all the components that make up the entire vSAN landscape for that vCenter, as well as each individual cluster's high-level characteristics, performance, and capacity. You can then quickly transition into the full version of vRealize Operations to get deep information on your vSAN landscape across multiple vCenters. You can read more about these capabilities in the following blog article.

 

Second, and the subject of this article, there are three dashboards specific to what's going on in vCenter, not necessarily related just to vSAN. Below I will show the dashboards specific to vCenter and give a couple of highlights for each.

 

Before I get into the dashboards I want to discuss the version requirements and some points about the installation process.

 

Version Requirements:

vCenter Version 6.7

vSAN Version 6.7

vRealize Operations Version 6.7

 

Installation Options:

 

You have two options when configuring the vRealize Operations within vCenter plugin. If you are not currently a vRealize Operations (vROps) user, you can deploy an instance of vRealize Operations directly from within the vCenter UI. If you are an existing vROps customer, you can hook the plugin to your existing environment. For more information on licensing and usage please see this blog article.

 

Let’s get back to the good stuff: the dashboards specific to vCenter that are in the plugin!

vCenter Overview Dashboard

 

With the vCenter Overview dashboard you get a top-down view of the specific vCenter, including the number of datacenters, hosts, virtual machines, and so on. You can immediately see if there are any alerts that might need your attention, as well as information on HA and workload balance across the entire vCenter. One of my favorite bits of information is in the “What can be Reclaimed” widget. Here you can see a quick view of waste PLUS how much you could save each month by reclaiming it. That's right, you can see reclamation costs right here in vCenter. Now you can quickly answer the boss's questions about how much you can save by reclaiming resources!
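To show the kind of arithmetic behind such a savings number, here is a trivial sketch with made-up figures; the actual widget derives its numbers from vRealize Operations cost analytics, not from this formula.

```python
# Made-up figures: a back-of-the-envelope reclamation savings estimate.
idle_vms = 12
monthly_cost_per_vm = 85.0     # $/month, hypothetical showback rate
reclaimable_vcpus = 40
monthly_cost_per_vcpu = 12.0   # $/month, hypothetical

savings = (idle_vms * monthly_cost_per_vm
           + reclaimable_vcpus * monthly_cost_per_vcpu)
print(f"estimated monthly reclamation savings: ${savings:,.2f}")
```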

 

vCenter Cluster View Dashboard

 

The vCenter Cluster View dashboard gives similar information to the vCenter Overview dashboard but at the specific cluster level. Again, you can see quick views of possible alerts, capacity remaining, and reclamation opportunities. You also get performance information on CPU and memory, as well as a list of VMs that are potentially having resource constraints for CPU, memory, or disk. Luckily, in my environment, I don't have any resource issues right now… I'm sure that will change soon!

 

vCenter Alerts Dashboard

The final dashboard of the three specific to vCenter in the new plugin is the Alerts dashboard. Here you will see any alerts specific to the vCenter and its components. They are listed in order of criticality so you can make sure you are addressing the most impactful problems first. By clicking the link to the right of an alert you will be directed to the full version of vRealize Operations, directly to the object that the alert concerns. From there you can get all the benefits of vROps, including very detailed information on the object and concern as well as recommendations and actions for remediation.
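If you would rather pull the same alerts programmatically, the vRealize Operations Suite API can return them. The sketch below is a hedged example: the hostname and credentials are placeholders, and the endpoint paths and vRealizeOpsToken auth scheme reflect the vROps 6.x Suite API as I understand it; verify them against the API docs on your own instance (https://&lt;your-vrops&gt;/suite-api/) before relying on them.

```python
# Sketch against the vROps Suite API; verify paths on your instance
# (https://<your-vrops>/suite-api/). Credentials are placeholders.
import requests

VROPS = "https://vrops.example.com"                    # placeholder host
CREDS = {"username": "admin", "password": "changeme"}  # placeholders

session = requests.Session()
session.verify = False  # lab only; use proper certificates in production
session.headers.update({"Accept": "application/json"})

# Acquire a token, then send it using the vRealizeOpsToken auth scheme.
token = session.post(f"{VROPS}/suite-api/api/auth/token/acquire",
                     json=CREDS).json()["token"]
session.headers["Authorization"] = f"vRealizeOpsToken {token}"

alerts = session.get(f"{VROPS}/suite-api/api/alerts").json().get("alerts", [])

# Sort client-side so the most critical alerts come first, mirroring
# the ordering of the vCenter Alerts dashboard.
rank = {"CRITICAL": 0, "IMMEDIATE": 1, "WARNING": 2, "INFORMATION": 3}
for alert in sorted(alerts, key=lambda a: rank.get(a.get("alertLevel"), 9)):
    print(alert.get("alertLevel"), "-", alert.get("alertDefinitionName"))
```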

 

As you can see, the new vRealize Operations within vCenter plugin gives you an amazing amount of information directly in the vCenter UI. These great dashboards are all powered by the awesome new capabilities found in the latest version of vRealize Operations and can help you truly achieve self-driving operations in your environment.

 

Visit the New Website for Technical Details

You can view several other videos for vRealize Operations 6.7 on our new website VMware vRealize Suite Technical Guides which is focused on technical resources you can use to get the most out of your vRealize Operations experience.

 

 

The post vRealize Operations within vCenter Plugin appeared first on VMware Cloud Management.
