
vCenter Orchestrator: Configure an MS SQL Server database connection


Gartner’s Magic Quadrant for Server Virtualization 2014 compared to previous editions


Recently Gartner published their Magic Quadrant for Server Virtualization 2014 edition.
The x86 server virtualization infrastructure market is defined by organizations that are looking for solutions to virtualize applications from their x86 server hardware or OSs, reducing underutilized server hardware and associated hardware costs, and increasing flexibility in delivering the server capacity that applications need. The x86 server virtualization infrastructure market includes all x86-based workloads (that is, application, Web and database servers; hosted virtual desktops [HVDs]; and file, print and security servers) deployed on standard x86-based physical servers.
Although a lot is happening in the virtualization space, VMware remains the leader. The 2014 diagram doesn't show much of a difference compared to the 2013 MQ. We see that Citrix's virtualization solution moved to the niche players quadrant. A new player in the field is Huawei, which has been added to the MQ. For your convenience I've included the Magic Quadrants for Server Virtualization 2011-2014 in this article. Full details on this year's edition are available here. You can also read last year's post on the MQ.

2014

[Figure: Gartner Magic Quadrant for Server Virtualization, 2014 edition]

2013

[Figure: Gartner Magic Quadrant for Server Virtualization, 2013 edition]

2012

[Figure: Gartner Magic Quadrant for Server Virtualization, 2012 edition]

2011

[Figure: Gartner Magic Quadrant for Server Virtualization, 2011 edition]

Veeam Availability Suite v8 – Be the first to know when it’s there!


Veeam offers one of the most popular backup/availability suites for virtualized (both Hyper-V & VMware) environments. The latest version of Veeam's solution is the Availability Suite version 8, which will be available soon.

Veeam Availability Suite v8 offers some cool features like High-Speed Recovery, Data Loss Avoidance, Verified Protection, Leveraged Data and Complete Visibility:

  • HIGH-SPEED RECOVERY - Enables low recovery time objectives (RTOs) of < 15 minutes; enables rapid recovery of the data customers want, in the form that they want it.
  • DATA LOSS AVOIDANCE - Avoids data loss by enabling low recovery point objectives (RPOs) of < 15 minutes and facilitating offsite data protection.
  • VERIFIED PROTECTION - Reliably restores files, applications and virtual servers when needed; ensures business resiliency through automated backup and DR testing.
  • LEVERAGED DATA - Mitigates the risks associated with application deployment; puts your backups and replicas to work by testing changes in a production-like environment before deployment.
  • COMPLETE VISIBILITY - Provides monitoring and alerting tools so that you can discover and be alerted to issues in your IT environment before they have a significant impact.

Now it's time to leave your e-mail address at the Veeam website to be the first to know when version 8 of the Availability Suite is there.

The post Veeam Availability Suite v8 – Be the first to know when it’s there! appeared first on viktorious.nl - Virtualization & Cloud Management.

Live from VMworld 2014: A closer look at Hyper-Converged & VMware EVO:RAIL


One of the big announcements of VMworld 2014 is the introduction of VMware's EVO product line, including its first offering, EVO:RAIL. After the announcement of EVO:RAIL several sessions popped up in the agenda, focusing on this new product:

  • SDDC1337 - Technical Deep Dive on EVO:RAIL, the new VMware Hyper-Converged Infrastructure Appliance
  • SDDC1767 - SDDC at Scale with VMware Hyper-Converged Infrastructure: Deeper Dive
  • SDDC1818 - VMware Customers discuss the new VMware EVO:RAIL and how Hyper-Converged Infrastructure is impacting how they manage and deploy Software Defined Infrastructure
  • SDDC2095 - Overview of EVO:RAIL: The Radically New Hyper-Converged Infrastructure Appliance 100% Powered by VMware
  • SDDC3245-S - Software-Defined Data Center through Hyper-Converged Infrastructure
  • SPL-SDC-1428 - VMware EVO:RAIL Introduction (Hands-on Lab)

If you have access to vmworld.com, these sessions are expected to be published in about two weeks. If you're at VMworld now, you know where to go!

This article summarises what I've learned from the EVO:RAIL sessions I've attended at VMworld. So read on if you want to know more about hyper-converged in general and VMware's EVO:RAIL offering.

What is hyper-converged and how does it relate to EVO:RAIL?

Let's first take a look at what EVO:RAIL actually is. EVO:RAIL is VMware's hyper-converged solution. Hyper-converged refers to bringing together compute, storage and network in one box.

Let's compare three options when you're building an infrastructure and see how hyper-converged is different:

  • Build-your-own: A custom-built solution based on the best brands available in the market, selected by you as a customer. For example, you choose to run HP servers with NetApp storage and Cisco switching.
  • Converged infrastructure: A converged infrastructure is a pre-defined combination of hardware, mostly available as one SKU through a single vendor and/or single point of contact. The vendor offers a validated design which guarantees correct functioning of the solution. With a converged infrastructure you're using standard components (servers, network, storage/SAN). Popular examples are Cisco/NetApp FlexPod/ExpressPod, VBlock by Cisco/EMC/VMware and UCP by Hitachi.
  • Hyper-converged is the new kid on the block. With hyper-converged you put compute, storage and (to some extent) network in one box. For the storage part you're leveraging what is called a "server-SAN": a server-SAN uses the local disks in your servers to provide shared storage. Examples of hyper-converged solutions are Nutanix, SimpliVity and VMware's EVO:RAIL.

To get an idea of the (hyper-)converged market take a look at Gartner's Magic Quadrant for converged infrastructures:

[Figure: Gartner Magic Quadrant for converged infrastructure, June 2014]

Unfortunately Gartner treats converged and hyper-converged as one category in this MQ, but at least it gives you an idea of the players in the market. Notice that EVO:RAIL is not in the MQ, because this MQ dates back to June 2014.

Regarding hyper-converged it's important to notice that these solutions were initially a pre-defined combination of hardware and software. This is now changing, with Nutanix offering its software as a separate solution. VMware's EVO:RAIL is also software-only; VMware works together with hardware partners for a complete solution. SimpliVity uses rebranded Dell hardware, but will also be available on Cisco hardware in the future.

Advantages of converged infrastructure are the ease of procurement (everything available through a single SKU/single point of contact), reduced complexity and the predictability of the infrastructure. With hyper-converged you use even smaller standard building blocks which are easy to manage, follow a scale-out approach and offer a faster time to market.

Disadvantages of (hyper-)converged are the lack of customisation, specifically if your infrastructure requires some special configuration settings. The building blocks have to fit the load you're planning to run on the (hyper-)converged infrastructure (rigid configuration). You might also face vendor lock-in. Read this article for more information: To Converge Infrastructure or Not - That's the question.

VMware EVO:RAIL Hardware Specifications

This takes me to the next point: what hardware is currently available and what are the hardware requirements for EVO:RAIL?

Currently EVO:RAIL is available on Dell, EMC, Fujitsu, Inspur, Supermicro and Net One hardware. Big names missing here are of course Cisco, HP and IBM. What these vendors are up to I don't know, although Cisco announced a partnership with SimpliVity, which is an alternative hyper-converged solution. As you can imagine there might be some conflict of interest between VMware's new hyper-converged solution and proprietary solutions these hardware partners may have in development. On the other hand, Cisco, HP and IBM (and other vendors) might step into EVO:RAIL in a second phase. It also took HP some additional time to get a VSAN Ready Node available.

Typical minimum hardware specs for EVO:RAIL are currently:

  • Two Intel E5-2620 CPUs with six cores each;
  • 192 GB of RAM;
  • 3x 1.2 TB HDDs and 1x 400 GB SSD;
  • 2x 10 Gbit NICs;
  • 1x 1 Gbit NIC for management;

Notice these are minimum requirements; depending on exact customer requirements the server configuration can change. Another important point is that you will need 10 Gbit top-of-rack switches. These switches have to support multicast, including IGMP snooping and an IGMP querier, because of VSAN requirements (a configuration sketch follows below). VSAN? Yes, VSAN is part of and used in the EVO:RAIL solution.
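
As an illustration, enabling IGMP snooping and an IGMP querier for the VSAN VLAN on an NX-OS style switch could look like the sketch below. This is a hedged example that is not taken from the EVO:RAIL documentation: the VLAN ID and querier address are made up, and the exact syntax varies per switch vendor and platform.

! Illustrative sketch only - check your switch documentation for the exact syntax
ip igmp snooping
vlan configuration 10
  ip igmp snooping querier 192.168.10.1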

One EVO:RAIL building block consists of 4 servers contained in 2U of rack space. You can connect 4 EVO:RAIL building blocks, resulting in a maximum of 16 physical servers in one cluster. EVO:RAIL is available through the hardware vendor as a single SKU: you buy both the hardware and the VMware software.

Let's now zoom in on the software included in EVO:RAIL.

EVO:RAIL Software Components

On top of the hardware EVO:RAIL uses standard VMware software, consisting of:

  • VMware vSphere 5.5, including the vCenter Server Appliance;
  • VMware VSAN for storage;
  • VMware Log Insight;
  • The VMware Hyper-Converged Infrastructure Appliance.

Notice that NSX is not part of EVO:RAIL; there has been some confusion about this. EVO:RAIL actually uses standard vSwitches! That's right, it's not even using VMware's vSphere Distributed Switch (vDS), because of availability concerns regarding the vCenter Server Appliance (VCSA). The VCSA is required for managing a vDS but also runs on the EVO:RAIL building block. This might lead to a "chicken and egg" problem in case the VCSA is not available because of a (network) problem.

The EVO:RAIL Hyper-Converged Infrastructure Appliance

The magic of EVO:RAIL happens in the Hyper-Converged Infrastructure Appliance (HCIA). The HCIA is responsible for the initial deployment, management and monitoring of your EVO:RAIL building block, including the virtual machines running on it. The HCIA adds an extra, simplified management layer on top of the vSphere infrastructure running underneath and presents a lean HTML5-based management interface to the user.

After putting your EVO:RAIL block in the rack and starting the HCIA, a wizard is presented to configure the solution. The HCIA is responsible for configuring your ESXi hosts, vCenter Server and Log Insight, and uses VMware and standard open-source components for these tasks. The HCIA requires IPv6 and mDNS to discover its peers.

At the end of the deployment EVO:RAIL will do a comprehensive check of the final configuration. If any problem occurs, EVO:RAIL will allow you to fix your configuration.

The initial configuration of EVO:RAIL only takes around 15 minutes.

Each EVO:RAIL building block, consisting of 4 servers, requires an HCIA. A maximum of 4 HCIAs can be tied together, resulting in an EVO:RAIL configuration of 16 ESXi hosts running in one VMware cluster. You need one vCenter Server Appliance for this configuration.

After you've deployed the configuration, it's time to deploy some virtual machines. This can also be done through the HCIA interface.

EVO:RAIL offers three VM sizes: small, medium and large. The specific configuration of each VM size can be adjusted according to your own requirements.

According to some tests conducted by VMware, you can expect to run about 250 Horizon View desktops (2 vCPUs, 2 GB RAM, 30 GB disk) on a 4-node EVO:RAIL cluster, or about 100 server VMs (2 vCPUs, 6 GB RAM, 60 GB disk).

The HCIA offers some dashboards for monitoring; these dashboards are specific to EVO:RAIL and don't use vCenter Operations Manager/vRealize Operations. The dashboards offer information on how your cluster is performing and focus on the counters specific to EVO:RAIL.

If necessary you can still use the vSphere (Web) Client for managing your infrastructure. Because the HCIA doesn't store any information locally (and just uses standard VMware APIs), changes made using one of the vSphere clients will show up in the HCIA interface as well.

Another interesting new feature is that EVO:RAIL offers seamless upgrades for the environment. It doesn't use VMware Update Manager for this; the upgrade feature is part of the HCIA, which is pretty cool I think.

Q&A

At the end of session SDDC1337 (presented by Dave Shanley and Duncan Epping), which I attended, there was some time for attendees to ask questions. For your reference I've included these questions and the corresponding answers in this article.

Q: Is the Cisco Nexus 1000V supported as a virtual switch for EVO:RAIL?
A: No, currently only standard vSwitches are supported; even VMware's vSphere Distributed Switch is not supported [and I think there are no plans to support it in the future, see my comment on this above]. Support for the Nexus 1000V might come in a future version.

Q: How do you deal with hardware/firmware upgrades? Is this included in EVO:RAIL?
A: Hardware/firmware management is currently not part of EVO:RAIL and/or the HCIA.

Q: Can you mix hardware vendors in an EVO:RAIL environment/cluster?
A: Although this might be possible, it is certainly not advised. The question is: why would you want this? In a normal environment it's not common practice to mix servers of different hardware vendors in one cluster.

Q: What is the usable capacity of the VSAN datastore taking into account the minimum hardware requirements [see above]?
A: This depends on the exact configuration, but expect somewhere between 12.1 and 13.3 TB. [Notice: eventual VSAN capacity also depends on the configured policies with regard to your availability requirements]

Q: EVO:RAIL minimum requirements are mentioned during the presentation [and included in this article]. Can we expect other hardware configuration options?
A: Yes, although exact offerings depend on the hardware vendor and may vary.

Q: Can you import existing virtual machines into EVO:RAIL?
A: Yes you can; these VMs will show up in the EVO:RAIL/HCIA interface immediately.

Q: Is it possible to import/connect EVO:RAIL into an existing vCenter Server?
A: Yes, this is possible.

Some additional Q&A on VMware EVO:RAIL is in this article by René Bos. Marcel van den Berg's article on EVO:RAIL is also worth a read.

The post Live from VMworld 2014: A closer look at Hyper-Converged & VMware EVO:RAIL appeared first on viktorious.nl - Virtualization & Cloud Management.

Free solution: Veeam Endpoint Protection

At VeeamON 2014 (Veeam's data center availability event) Veeam announced a new solution called Veeam Endpoint Protection. With Veeam Endpoint Protection you can, unsurprisingly, protect...your endpoints. The good news is that Endpoint Protection will be available for free. Veeam Endpoint Protection is supported on Windows 7 and higher, and Windows Server 2008 and higher!

Some more information on what the product actually does:

Veeam Endpoint Backup FREE is a product that allows you to back up your Windows-based computers to an internal or external hard drive, a NAS share or a Veeam Backup Repository. It writes the backups in VBK format, which is the same format as Veeam Backup & Replication. With this product, you will be able to protect files, volumes or your entire computer, based on your own schedule or at logon or logoff.

Availability

In November the beta of Veeam Endpoint Protection will be released, while GA is expected early 2015. You can sign up for the beta here.

You can read the full story on Veeam Endpoint Protection in this blog.

The post Free solution: Veeam Endpoint Protection appeared first on viktorious.nl - Virtualization & Cloud Management.

First vBeers The Hague – Event report


Last Friday the first vBeers The Hague took place in restaurant Milu. About 15 people traveled to The Hague to see what vBeers is about...drinking a beer or two and having a chat about all things virtual & cloud. We had some geeks coming from the northern part of The Netherlands, one guy (you may guess who) all the way from Maastricht and of course some people from the region around The Hague.

We want to thank VMware, Trend Micro and ITQ for being very generous and sponsoring drinks and food. If you were attending: thanks for a great evening! The next vBeers The Hague will take place in January 2015; the exact date will follow. In the meantime don't forget vBeers Eindhoven on November 28, and there's also a VMUG Belgium coming up on November 21st.


The post First vBeers The Hague – Event report appeared first on viktorious.nl - Virtualization & Cloud Management.

NSX Basics: Creating a logical switch


In this post I will demonstrate how to configure a logical switch in a software-defined networking environment based on VMware NSX. The logical switch will have two virtual machines connected.

Before you can create a logical switch you first have to set up your NSX environment; I would recommend these excellent articles by Chris Wahl, which clearly explain how to set up NSX.

Prerequisites

Before we create the logical switch, let's first check if the underlying VXLAN infrastructure works as expected. In a vSphere-based NSX environment, VXLAN is used as the transport network for all virtualized network connections. VXLAN creates so-called "virtual wires" to achieve connectivity between virtual machines in a certain logical network segment. You can configure VXLAN to use either the native VLAN or a designated VLAN:

[Screenshot: VXLAN transport VLAN and MTU configuration]

As you can see, VLAN 10 is used for VXLAN in my environment. Also notice the MTU; this is 1600 for VXLAN and requires some additional configuration on your physical switch. In my lab I'm running a Cisco SG200-26 Gigabit switch. Unfortunately this switch doesn't allow you to configure a custom MTU, although it does allow you to enable jumbo frames (MTU 9000), which also satisfies the MTU 1600 requirement:

[Screenshot: jumbo frames enabled on the physical switch]

To test VXLAN connectivity you can use vmkping from the ESXi shell, or use the connectivity test that is included in the logical switch (more on that later). Note that if you want to test your VXLAN connectivity using vmkping, the extra "++netstack" parameter is required:

~ # vmkping ++netstack=vxlan 192.168.10.201 -I vmk3

VXLAN uses a separate network stack, so you have to specify explicitly that you want to test network connectivity through the VXLAN netstack.
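
If you're not sure which VMkernel interface is bound to the VXLAN netstack (vmk3 in my lab), you can list the interfaces per netstack from the ESXi shell. A small sketch, assuming NSX named the netstack "vxlan" as it did in my environment:

~ # esxcli network ip interface list --netstack=vxlan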

Configuring the logical switch

Configuring a logical switch is a pretty straightforward process (a scripted alternative follows after the list):

  1. Select logical switches in the NSX management interface;
  2. Click the plus sign to create a new logical switch;
  3. Think of a descriptive name. You might want to include the subnet of the logical switch here;
  4. Select the transport zone and select unicast if you want the NSX controller to be responsible for the VXLAN control plane (default);
  5. Click Enable IP Discovery to enable ARP suppression;
  6. Click Enable MAC Learning to avoid possible traffic loss during vMotion.
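
If you prefer scripting over clicking, a logical switch can also be created through the NSX REST API. Below is a hedged sketch: the NSX Manager address and the transport zone (scope) ID "vdnscope-1" are illustrative and will differ in your environment, so verify the resource paths against the NSX API guide for your version.

# Create a logical switch (virtual wire) in transport zone vdnscope-1
curl -k -u admin:password -X POST \
  -H "Content-Type: application/xml" \
  -d '<virtualWireCreateSpec><name>LS-192.168.50.0</name><tenantId>default</tenantId><controlPlaneMode>UNICAST_MODE</controlPlaneMode></virtualWireCreateSpec>' \
  https://nsxmgr.lab.local/api/2.0/vdn/scopes/vdnscope-1/virtualwires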

After the logical switch creation has completed, you can connect virtual machines to the logical switch. Virtual machines using a logical switch are connected through VXLAN virtual wires, which will appear on your distributed vSwitch.

You can verify logical switch operation using the monitor option. This option allows you to send pings between the participating hosts to test the broadcast domain. If you want to test VXLAN functionality, choose "VXLAN standard" as the size of the test packet, or select "minimum" to run an ordinary network test.

[Screenshot: logical switch connectivity test]

Notice that in this example the VXLAN test failed; this is caused by a limitation of my switch, which doesn't allow non-standard ICMP packet sizes:

Ping (ICMP) works only up to 1518 bytes and all the other kinds of data traffic should work up to the Jumbo frame limits.

The same network test succeeds when setting the size of the test packet to minimum. VXLAN works perfectly in my environment despite the failing test. Please note that configuring an MTU of 1600 on your physical switch is required for VXLAN to work.

The switch is ready for use now and you can connect virtual machines to the logical switch.

I hope this first article gives some insight into how to configure a logical switch using VXLAN. In an upcoming article I will demonstrate how to use the vCO NSX plug-in to create a logical switch. Happy networking!

The post NSX Basics: Creating a logical switch appeared first on viktorious.nl - Virtualization & Cloud Management.

Do’s and don’ts of implementing CA signed certificates in VMware vCenter 5.5


In this article I want to share some do's and don'ts of implementing CA-signed certificates in VMware vCenter 5.5. Although some tooling is available to automate the entire process, implementing certificates in a vSphere environment can be quite challenging. Depending on your exact configuration, you can end up updating certificates in seven or more different places, all linked to each other.

I hope you will find my tips & tricks useful. Here we go:

  • First of all: make a snapshot of all components that are involved in the update process, so you can always roll back in case of problems.
  • You need access to the VMs running the different VMware vCenter services (if applicable).
  • You need the passwords of the administrator@vsphere.local account and of a vCenter administrator account. You will also need the original database password.
  • Notice that the administrator@vsphere.local password cannot contain special characters such as &, ^, % or <, because the configuration of the Inventory Service will fail otherwise. Read this VMware KB for more info.
  • If you're using a Windows Certificate Authority for the certificate signing, create a new VMware SSL certificate template. Instructions are available in this VMware KB.
  • Use the SSL Certificate Automation Tool, download available here. Important: only version 5.5 of this tool supports vSphere 5.5; version 1.x doesn't support vSphere 5.5. The SSL Certificate Automation Tool only supports Windows 2008 and Windows 2012 based vCenter Server installations.
  • For your convenience: edit and update ssl-environment.bat (part of the SSL Certificate Automation Tool) with the values that are applicable to your environment.
  • Use the SSL Certificate Automation Tool for creating the Certificate Signing Requests (CSRs) and for updating the current certificates.
  • Open a command prompt and use option 1 in the SSL Certificate Automation Tool to plan the steps of the update process. Open a second command prompt to execute the actual steps using ssl-update.bat. Use "run as administrator" when starting the command prompts.
  • When submitting the CSRs (generated by the SSL Certificate Automation Tool) to a Windows CA, make sure you use the VMware SSL certificate template created earlier and download the certificate (not the chain) as a Base64-encoded .cer certificate.
  • You will also need the root certificate of your CA. Download this certificate (not the chain) in p7b format, open the certificate and export it as a Base64 X.509 .cer certificate.
  • You have to combine the individual certificate of each server with the root CA certificate into a new .pem file (a layout sketch follows after this list). More information on this step and the previous two steps is in this VMware KB article. Use a good text editor, like Notepad++ or UltraEdit, for this task.
  • When updating your certificates, follow the exact steps as displayed by the "Plan your steps" option in the SSL Certificate Automation Tool.
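
For reference, the resulting .pem file is nothing more than the Base64 certificate blocks concatenated in order: the server certificate first, the root CA certificate last (if an intermediate CA is used, its certificate goes in between). A sketch of the expected layout, with illustrative file names:

-----BEGIN CERTIFICATE-----
(contents of the server certificate, e.g. rui.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(contents of the root CA certificate, e.g. Root64.cer)
-----END CERTIFICATE-----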

For the rest: drink some beer, and cross your fingers. Good luck!

The post Do’s and don’ts of implementing CA signed certificates in VMware vCenter 5.5 appeared first on viktorious.nl - Virtualization & Cloud Management.


Identity-based firewalling with VMware NSX


I recently got a question about VMware NSX: is it possible to create firewall rules that depend on the user that is logged on to a server or virtual desktop? The use case is to implement (extra) network security to allow or block network access to certain applications/servers in the data center, depending on the logged-on user. VMware calls this "identity-based firewalling", which is one of the features of NSX.

To use identity-based firewalling you will need VMware vSphere, NSX and Active Directory. These solutions take care of monitoring which user is logged on to a desktop and change/update the pre-configured firewall rules accordingly. In this article I will show you how to configure an identity-based firewall rule.

Connect to AD and configure the grouping object in NSX

The first step is to connect NSX to Active Directory. This step is completed on the NSX Manager under Manage -> Domains. Add the domain you want to use to NSX:

[Screenshot: domain configuration in NSX Manager]

An identity-based firewall rule in NSX uses Active Directory groups. You have to add the AD group you want to use to an NSX grouping object, available under Manage -> Grouping Objects. The next screenshot shows a new grouping object called "AD Group" with a dynamic membership defined where "Entity belongs to TestGroup". Note that TestGroup is a group in Active Directory. You can use the grouping object, and thus the AD group membership, in a firewall rule...that's exactly what we want!

[Screenshot: grouping object with dynamic membership based on the AD group]

Configure the firewall rule

After you've successfully created the grouping object, it's time to create the firewall rule. This step is actually pretty straightforward:

[Screenshot: identity-based firewall rule]

In this example the rule's source is the grouping object "AD Group", while the destination is the gateway of the network. I've created a block rule that blocks all traffic to the gateway if a member of the AD TestGroup is logged on to a virtual machine. In this example a member of TestGroup is logged on to testvm01, so this results in a blocking rule for this virtual machine. After I log on to this virtual machine with an account that is not a member of the TestGroup AD group, testvm01 is automatically removed from the rule:

[Screenshot: the firewall rule after testvm01 was removed]

I noticed that the rule only changes after another user logs on; logging off the original account will not remove the VM from the firewall rule.

I hope this helps in creating some identity-based firewall rules. Feel free to send me any questions, or let me know if anything written here is incorrect or incomplete.

The post Identity-based firewalling with VMware NSX appeared first on viktorious.nl - Virtualization & Cloud Management.

VMware vSphere 6: Most important announcements summarized


The waiting is over! It's here, it's new, it's great - today VMware announced VMware vSphere 6. I've summarized all the latest news in this article, so read on to learn more!

New: Announced product versions

Apart from the announcement of VMware vSphere 6, new versions of other VMware products were announced:

  • vSphere with Operations Management 6.0 = vSphere 6.0 + vRealize Operations Manager Standard 6.0 (available since last December);
  • Virtual SAN 6.0, which is actually the second version of the VSAN solution;
  • vCenter Site Recovery Manager 6.0 - no new features, just compatibility with vSphere 6.0;
  • vCloud Suite 6.0 - with support for vSphere 6 in the different components; vRealize Business Standard is now included;
  • VMware Integrated OpenStack 1.0 - something to really look at!

Improved in vSphere 6: Increased vSphere Maximums

Maybe not that interesting anymore, but the maximums for vSphere have been raised again. vSphere now supports:

  • 64 hosts per cluster;
  • 8,000 VMs per cluster;
  • 480 CPUs per host;
  • 12 TB RAM per host;
  • 1,000 VMs per host.

VMs now support a maximum of 128 vCPUs, 4 TB of RAM, vNUMA-aware hot-add RAM and USB 3.0.

Improved: Fault Tolerance

With Fault Tolerance you can protect a virtual machine by running a second, 100% identical virtual machine on another host. One of the shortcomings of Fault Tolerance was that it only supported VMs with 1 vCPU. Fault Tolerance in vSphere 6 finally supports multi-vCPU VMs, with up to 4 vCPUs per virtual machine. A 10 Gbit network is very much recommended when you plan to use FT on virtual machines with more than 1 vCPU.

FT-protected VMs now support VADP-enabled backups, including the required snapshot technology. Note that normal snapshots on FT-enabled VMs are not supported.

Fault Tolerance now keeps a separate storage copy for the secondary VM, so secondary storage is required. It's also now possible to "hot-configure" (enable) FT on a running virtual machine.

New: Virtual Machine Component Protection

Virtual Machine Component Protection (VMCP) is a new feature in vSphere 6 that provides an automated response to All Paths Down (APD) or Permanent Device Loss (PDL) situations. VMCP protects VMs against storage connectivity failures and misconfigurations.

If an APD or PDL condition occurs, VMs are automatically restarted on a healthy host. This is especially beneficial for stretched cluster architectures, but is of course useful for any environment using some kind of SAN storage. VMCP currently only covers storage failures; network problems are not yet detected.

New: vCenter Server 6.0 Platform Services Controller

The Platform Services Controller (PSC) groups Single Sign-On (SSO), licensing and a Certificate Authority (CA). The PSC replaces these separate components and combines their functionality in one solution.

The PSC comes as an embedded option, or as a centralized/stand-alone option when two or more SSO-integrated solutions are deployed. With the PSC, Linked Mode is completely integrated in vSphere: Microsoft ADAM is no longer required. You can now also add a VCSA to Linked Mode; you can even mix appliance-based and Windows-based vCenter Servers.

New: vCenter Server 6.0 Certificate Lifecycle Management & Clustering Support

You can now use vCenter Server 6 for complete certificate lifecycle management. vCenter 6 can act as a certificate authority, provisioning certificates to each ESXi host. The VMware Endpoint Certificate Store (VECS) stores all certificates for the different vCenter services. The VMware Certificate Authority can act as a root CA or as an issuer CA.

Clustering support for vCenter Server 6 will be announced soon. This only applies to the Windows version, not to the appliance.

Improved: vMotion

Some interesting improvements on vMotion are available:

  • Cross vSwitch vMotion - You can now vMotion from a standard switch to a standard switch, from a standard switch to a distributed switch, and from a distributed switch to a distributed switch and vice versa;
  • Cross vCenter vMotion - You can now vMotion across vCenter Servers, which changes compute, storage, network and of course the vCenter Server. Also read my article Future of Disaster Recovery with NextGen VMware Site Recovery Manager for a cross vCenter vMotion use case;
  • Long distance vMotion - Up to 100 ms RTT, no VVOLs required; use cases: permanent migration, disaster avoidance, multi-site load balancing, follow the sun;
  • vMotion can now cross layer 3 boundaries, so a stretched layer 2 network is no longer required.

New: vCenter Server 6 Content Library

The new Content Library introduces simple content management for VM templates, vApps, ISO images and scripts. With the Content Library you can store and manage content in one central location, and the content is automatically distributed over different vCenter instances. The maximum size of a content library is 64 TB, you can store a maximum of 256 items, and a maximum of 10 simultaneous copies is supported. Synchronization of the content library occurs once every 24 hours.

Improved: vSphere WebClient

Of course you already *love* the vSphere Web Client, don't you? Probably not...but wait, there's the new Web Client! It's very much improved: better login times, a faster right-click menu and faster performance charts. Things have also improved from a usability perspective: the recent tasks pane has moved to the bottom and the right-click menus are flattened. The Virtual Machine Remote Console (VMRC) has also improved and now looks more or less the same as the VMRC in the Windows vSphere Client.

Improved: Network I/O Control Version 3

With NIOC version 3 you can now guarantee bandwidth to satisfy service levels. This can be applied at the vNIC level or at the distributed port group level.

More information on VSAN 6.0 is here.

The post VMware vSphere 6: Most important announcements summarized appeared first on viktorious.nl - Virtualization & Cloud Management.

Virtual SAN (VSAN) 6.0: What’s New


Virtual SAN 6.0

With VMware vSphere 6, Virtual SAN 6.0 also becomes available. Virtual SAN 6.0 is the second version of VMware's server-side SAN solution. The most important improvements in this second version of VSAN are:

  • Support for an all-flash architecture;
  • Major improvements in performance & scale - a maximum of 64 VSAN nodes per cluster, 200 VMs per VSAN host, 100K IOPS per host and a maximum VMDK size of 62 TB;
  • New high performance snapshots & clones;
  • Rack awareness to tolerate rack failures by using fault domains;
  • Support for direct attached JBODs, especially useful in blade environments;
  • Hardware-based checksums & encryption.

Read on to get more details on this new version of VMware's Virtual SAN.

New: Virtual SAN Architectures

Virtual SAN 6.0 offers two different architectures:

  1. Hybrid - Virtual SAN uses SSD devices for caching and spinning disks (HDDs) for capacity;
  2. All-flash - Virtual SAN uses flash storage for both caching and capacity.

Read the table below for some more details on these two options:

[Table: hybrid vs. all-flash Virtual SAN configurations]

In an all-flash VSAN configuration, the flash storage responsible for caching only performs write caching, no read caching. It's recommended to use high-grade flash devices for the cache; you can use lower-cost, read-intensive flash devices for capacity. In a Virtual SAN hybrid architecture the flash devices for caching serve two purposes: 30% is used as a non-volatile write buffer, while 70% of the capacity is used as read cache (see the worked example below).
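
As a worked example of that split (the 400 GB device size is just an illustration):

400 GB flash cache device in a hybrid disk group:
  read cache   = 70% x 400 GB = 280 GB
  write buffer = 30% x 400 GB = 120 GB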

With all the improvements, I am sure that VSAN 6.0 can now satisfy the needs of your most demanding business critical apps.

New: High Density Direct Attached Storage / JBOD

With the new High Density Direct Attached Storage (HDDAS) option it's now possible to use VSAN in an effective way in blade environments. Both SSDs and HDDs are supported in an HDDAS configuration, as well as in combination with local flash devices in the (blade) servers.

New On-Disk Format

Virtual SAN 6 introduces a new on-disk format to support higher performance characteristics and high performance snapshots & clones. If you want to make use of these improvements, an upgrade to the new on-disk format is required. The new on-disk format of VSAN 6.0 introduces a new VMDK type called vsanSparse. vsanSparse-based snapshots (as opposed to the redo-log based snapshots in VSAN 5.5) are expected to deliver performance comparable to native SAN snapshots.

New: Proactive Rebalance

Another nice new feature of VSAN 6.0 is proactive rebalance. Proactive rebalance redistributes VSAN objects when a new node is added to an existing VSAN cluster or when disks are more than 80% full.

A proactive rebalance is performed through the Ruby vSphere Console (RVC); a sketch follows below.
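
As a sketch of what that looks like (the cluster path is illustrative and the command names are as I understand them in the VSAN 6.0 version of RVC, so verify against the RVC documentation):

# From an RVC session connected to vCenter:
> vsan.proactive_rebalance --start /localhost/DC/computers/VSAN-Cluster
# Check the rebalance status:
> vsan.proactive_rebalance_info /localhost/DC/computers/VSAN-Cluster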

New: VSAN Fault Domains

Fault Domains is a new VSAN feature that provides the ability to group multiple hosts within a cluster and define failure domains.

These new Fault Domains provide the ability to tolerate rack-, storage controller-, network- and power failures. In the future we might also be able to use the Fault Domains in a stretched cluster VSAN configuration, which is currently not yet supported. With the introduction of Fault Domains you can efficiently divide/place VSAN components in different racks / blade enclosures. An example is included in the next figure:

[Diagram: VSAN components placed across four rack-based fault domains]

In this example four Fault Domains (FD) are defined; each rack is an FD. In this example the number of Failures To Tolerate (FTT) = 1, which means we need two copies of the data and a witness. Because FDs are used, the different VSAN components are placed in different FDs.

New Disk Serviceability Functions

VSAN 6 introduces some new disk serviceability functions:

  1. Light LED on failure - The disk LED automatically turns on in case of a failure;
  2. Turn on disk LED manually - Turn on a disk LED to locate a drive;
  3. Marking a disk as local - If a disk is not detected as a local disk, you can tag/untag the disk in the GUI;
  4. Marking a disk as SSD - This option is now included in the GUI; if a disk is not recognized as an SSD device you can tag or untag it.

That's it for now! To learn more about what's new in vSphere 6, read this article.

The post Virtual SAN (VSAN) 6.0: What’s New appeared first on viktorious.nl - Virtualization & Cloud Management.

Project VRC ‘State of the VDI and SBC union 2015’ – Sneak preview


A couple of weeks ago I asked you to participate in the Project VRC "State of the VDI and SBC union 2015" survey. This survey ran until February 15th; the full report including all the results will be published in a couple of weeks. I hope you participated in this survey.

Let's take a look at what Project VRC and the State of the VDI and SBC union actually is:

Project ‘Virtual Reality Check’ conducted the third worldwide State of the VDI and SBC union survey in January and February 2015. This survey is focused on the desktop virtualization space, and participants are IT professionals who are active in the desktop virtualization industry, working for IT vendors, IT partners and IT departments.

The goal of the survey is to share insights about usage, configuration and trends in the Virtual Desktop Infrastructure and Server Based Computing industry: ‘the state of the VDI and SBC union’. In total, 519 participants completed the full survey. Participants come from the US, the UK, The Netherlands, Germany and 20+ other countries.

As a sneak preview I can already share some of the results here at my blog viktorious.nl. The original survey includes almost 50 questions.

Which hypervisor is deployed for VDI workloads?

A first question I find interesting: which hypervisor is currently deployed for VDI workloads?

The following bar-chart offers some insights:

[caption id="attachment_5238" align="alignnone" width="615"]projectvrc01 2015 results[/caption]

Let's compare these results with the results of the same question in 2014:

[caption id="attachment_5239" align="alignnone" width="615"]2014 results 2014 results[/caption]

As you can see, vSphere is the most popular hypervisor for VDI workloads. A shift is of course going on from vSphere 5.1 to 5.5; the total market share of vSphere decreased from 72.5% to 67.4%. Citrix XenServer is gaining some extra market share in this space, and the Hyper-V share also slightly increased. The 2014 edition asked which hypervisor is used for server workloads; unfortunately this question was dropped in the 2015 survey.

Which storage technologies are used to host VDI desktops?

Another interesting topic is which storage technologies are used to host the VDI desktops. With all the developments in the storage space, we can expect some interesting results here.

[caption id="attachment_5243" align="alignnone" width="615"]2015 results 2015 results[/caption]

And the 2014 results for the same question:

[caption id="attachment_5244" align="alignnone" width="615"]2014 results 2014 results[/caption]

Notice the small difference in the original question. It's interesting to see that the share of SSD-enabled SAN storage decreased while the share of SANs with only spinning disks increased. This could be explained by the significant increase in usage of all-flash SAN solutions. Unfortunately the question didn't include an answer for hyper-converged solutions leveraging a server-side SAN, like Nutanix, SimpliVity or EVO:RAIL/VSAN. The "other" answer increased a couple of points; maybe this represents usage of server-side SAN.

The full report

The official public release of this white paper is scheduled for Tuesday, March 24. Once it's available, you can download the report after a free registration at the projectvrc.com website. If you participated in the survey, you probably already received the full report.

I hope this was helpful, please don't hesitate to leave any feedback below.

The post Project VRC ‘State of the VDI and SBC union 2015′ – Sneak preview appeared first on viktorious.nl - Virtualization & Cloud Management.

How your organization can benefit from automation & orchestration


If you're new to automation & orchestration you might want to read an article I've published on pqr.com. The article is only available in Dutch, but you can of course use Google Translate for an English version. A translation of the introduction:

What can automation & orchestration mean for you?

One of the developments that has really taken off over the past year is automation & orchestration. With automation & orchestration, or orchestration for short, you can automate your management processes and then make them available within your organization through a self-service portal. For example, you can automate the roll-out of a virtual machine and offer the request for it in the catalog of the self-service portal.

PQR has gained in-depth experience with orchestration at various customers. In this blog I'll gladly take you into this world and explain what you need for orchestration and in what way this technology can help you and your organization.

Read on at the pqr.com website.

The post How your organization can benefit from automation & orchestration appeared first on viktorious.nl - Virtualization & Cloud Management.

A first look at Nutanix Community Edition


You've probably heard about the latest initiative from Nutanix: the company is releasing a Community Edition of their hyper-converged solution. Nutanix Community Edition (CE) is a free version of the Nutanix Operating System (NOS), the operating system that powers the Nutanix platform. With the Community Edition you can evaluate the solution on your own hardware. That's pretty cool I think...

Currently I'm in a private alpha group to test the software, but a public beta will be available soon. More info on this at the end of this article.

Some background information and some system requirements

Nutanix CE comes as an installable image which you can copy to a USB drive or Disk on Module (DOM). You can run Nutanix CE as a 1-, 3- or 4-node cluster, which makes it very suitable for your home lab. Required are:

  • Intel CPUs with VT-x support, 4 cores minimum;
  • A minimum of 16 GB RAM; 32 GB is recommended. More memory = more fun;
  • Intel-based NICs are supported;
  • At least one SSD drive with a minimum capacity of 200 GB, used for the hot tier;
  • For cold-tier storage a maximum of 3 disks is supported; these can be HDDs or SSDs;
  • The total number of drives is 4;
  • One USB drive or DOM to run NOS.

Because you can just use a USB drive, your lab/test setup is interchangeable with an existing ESXi setup on USB. Just swap the USB drive and you're ready. Do keep in mind that the internal SSD and HDD drives you use for the Nutanix CE installation will be claimed by Nutanix CE.

Note that Nutanix CE runs the KVM (Kernel-based Virtual Machine) hypervisor. This means things work differently, and you will need some time to get used to KVM if you're a VMware or Microsoft virtualization person.

My personal lab setup

I've installed Nutanix CE successfully on my lab server, an HP Z800. Installation took me only half an hour and I didn't face any serious issues.

My configuration is:

  • HP Z800;
  • 2x Intel Xeon E5520 quad-core CPUs @ 2.27 GHz;
  • 72 GB RAM;
  • 3x HP 250 GB HDD;
  • 1x Samsung 850 EVO 250 GB SSD;
  • 1x Kingston DataTraveler Micro 16 GB USB drive.

After the installation you can log on to the CE web interface, complete the configuration and start deploying some VMs! Because I only have one lab server available I cannot test the distributed features, which is a pity. What remains is still a pretty interesting Nutanix CE based environment.

How to join the Nutanix CE Beta

If you also want to start working with Nutanix Community Edition you can now join the public beta. The public beta becomes available on the first day of the Nutanix .NEXT conference, which is on June 8th...that's next Monday. Just register on the Nutanix CE beta website to get hold of the software.

Learn more

If you want to learn more about the Nutanix platform I have some suggestions for you:

  • The "Nutanix Bible" website by Steven Poitras, including some very good background information on how NOS and Nutanix file system NDFS works;
  • The nu.school, the education platform of Nutanix;
  • The nu.school YouTube channel;
  • Nutanix documentation.

I think Nutanix is certainly making their technology available to a greater audience with the release of CE. Once you've found your way around NOS and KVM, you can have a lot of fun with the product and learn how this hyper-converged solution works.

That's all for now; stay tuned through Twitter for more information on Nutanix Community Edition.

The post A first look at Nutanix Community Edition appeared first on viktorious.nl - Virtualization & Cloud Management.

Some questions and answers about VMware NSX #SDN


In the software-defined data center (SDDC), software is the foundation powering the evolution of network and data center infrastructure. The SDDC increases flexibility, agility and scalability. In the vision of VMware, the SDDC consists of vSphere for virtualization of CPU and memory, VSAN and VVOLs for storage virtualization, and VMware NSX for network virtualization. In this article I've included some common questions and answers about VMware NSX. So read on if you want to know more about VMware's Software Defined Networking (#SDN) solution!

What is network virtualization?
Network virtualization is of course...about virtualizing the network. It's about decoupling the network from the physical layer and implementing & configuring your network infrastructure in the virtual layer. VMware NSX offers virtual switches, routers, load balancers, firewalls and more. Of course you already know virtual switches, but NSX virtual switches are a little different (more about that later). SDN increases flexibility, agility and scalability, and is an important next step towards your SDDC.

Do I need new hardware switches to implement #SDN?
In the case of VMware NSX: no! You can just use your existing physical network. However, some (limited) changes are required for NSX to function properly. First, a minimum MTU of 1600 is required in the physical network, because NSX uses VXLAN as an overlay technology and VM network packets are encapsulated in VXLAN packets (see the quick calculation below). You probably also need to configure multicast in your physical network. Depending on the exact configuration you will need IGMP, IGMP snooping and sometimes PIM (Protocol Independent Multicast). Again, this depends on your exact requirements and NSX deployment scenario. You can run NSX without the multicast part, but this will limit scalability.
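
The 1600-byte minimum follows from the encapsulation overhead. A quick back-of-the-envelope calculation based on the standard VXLAN header sizes (not taken from the NSX documentation):

  1518 bytes  maximum standard Ethernet frame coming from a VM
+   50 bytes  VXLAN overhead (outer Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN 8)
= 1568 bytes, so an MTU of 1600 on the physical network leaves some headroom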

Okay, and in regards to NSX: what components do I need to install?
With NSX, life starts with the NSX Manager. The NSX Manager connects to your vCenter Server and deploys the NSX controllers. The controllers (3 are required) are a prerequisite for NSX to function. After the controllers are deployed you can implement logical switches, distributed logical routers and the distributed firewall. You can also deploy NSX Edges in case you need a perimeter firewall, VPN or a load balancer. The NSX Manager will also install the vSphere Installation Bundles (VIBs) on your ESXi hosts, adding some new kernel-level networking features: VXLAN, the distributed router and the distributed firewall. For VXLAN you will need to configure "VTEPs".

VXLAN, VTEPs? What is VXLAN exactly doing in an NSX implementation?
Using VXLAN, you can create logical networks for your virtual machines across different layer 2 segments. Logical networks are interconnected using logical switches and distributed logical routers. A VXLAN tunnel starts and ends at a VTEP, a Virtual Tunnel EndPoint. VTEPs are configured at the ESXi level and are represented by a VMkernel port. A VXLAN can span layer 2 and layer 3 segments; it's no problem to route VXLAN traffic. In this way it's possible to stretch a layer 2 logical network across a layer 3 boundary! Wow, that will certainly solve some challenges. VXLAN creates an overlay network, and this overlay is used to shape your virtual network.

So about these logical switches and distributed logical routers, why do I need them?
You already know the (distributed) virtual switch (DVS) if you're using vSphere. The (NSX) logical switch and the distributed logical router (DLR) work a little differently.

In a traditional environment you have different VLANs available, which are connected to each other through a (physical) router. Most of the time the core switch is the central router in your network. So if two virtual machines in different VLANs want to talk to each other, the network flow goes from VM A, through the DVS, through the physical network, the physical router, the physical network again (other VLAN), the DVS and finally to VM B. In a world of logical switches and DLRs, each ESXi host has its own router instance. If VM A and VM B run on the same physical host, the network traffic will never exit the host, because the routing takes place in the local instance of the DLR. If the VMs are running on different hosts, the network traffic only travels between the hosts using VXLAN and never passes the physical router. That's pretty efficient, right?

Wow that sounds pretty cool. Can I do more with these logical switches and distributed logical routers?
Well, the logical switches and routers also integrate with the NSX distributed firewall. Maybe you know vShield App; the NSX distributed firewall is the evolution of vShield App and enables you to create per-VM firewall rules, both on layer 2 and layer 3. The distributed firewall can be used to implement micro-segmentation, which will certainly improve security in your SDDC! You can create firewall rules and link them to a particular VM or a group of VMs. A VM group can be based on VM name, the guest OS, the connected network, or a particular cluster or datacenter. It's also possible to activate firewall rules based on the user that logs on to a particular VM. In an earlier article titled "Identity-based firewalling with VMware NSX" I've described this scenario.

Now let's say I want to integrate my NSX environment with the physical world. What are my options?
With the distributed logical router (DLR) you can bridge or route network traffic from a logical network to existing (physical) VLANs. With bridging you create a transparent connection so both virtual machines and physical servers/desktops can communicate with each other in the same layer 2 domain. You can also choose to connect a distributed port group on the underlying distributed virtual switch to a port on the distributed logical router. The DLR is connected to a VLAN Logical Interface (VLAN LIF) in that case.

Another option is to deploy an NSX Edge and route traffic to one or more VLANs. An NSX Edge (f.k.a. vShield Edge) is a virtual machine that offers a broad set of services: routing, perimeter firewall, network address translation (NAT), DHCP, virtual private network (VPN), load balancing and high availability. In this case the routing service allows you to route traffic from a logical network to a VLAN. Note that bridging is not an option with NSX Edge.

Ok, and if I want to use this NSX Edge as my perimeter firewall?
You can use NSX Edge as a perimeter firewall, although it depends on your exact requirements whether this is a valid use case. The Edge firewall offers some good basic firewalling options, but lacks some of the advanced (IDS, IPS) features. The same goes for the VPN service, which is pretty powerful: you can use this service for RoBo scenarios (routed or stretched layer 2) and also for remote access scenarios. But if you're looking for an enterprise-level VPN solution, the NSX service might not be the best option...it just depends on what you're looking for.

Talking about security, is NSX offering any other security features besides the firewall options?
Well, a very powerful part of NSX is the Service Composer. With the Service Composer you can create security policies and apply these policies to security groups. A security group is a group of virtual machines based on, for example, the port group, cluster, logical switch, datacenter or virtual machine name. A security policy in turn can include guest introspection services such as antivirus (f.k.a. vShield Endpoint), data security, network introspection services and/or firewall rules. With the Service Composer an administrator can graphically see information about security groups and policies.

You can extend the NSX operational model to third-party services. After registering and deploying the 3rd party solution, you can consume the service in NSX. Let's say you're using a 3rd party AV solution and the solution detects a virus in a particular virtual machine. The virtual machine is now tagged as infected. Because of this tag, the "infected" policy becomes active for this virtual machine. The policy applies two rules: isolate the virtual machine using the distributed firewall and remove the virus. After the virus is removed and the virtual machine is safe again, the infected tag is removed as well and the virtual machine can continue normal operation. (A sketch of how such a tag can be set through the NSX API follows below.)
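
Under the hood these tags are regular NSX security tags, which are also exposed through the NSX REST API. A hedged sketch (the manager address, tag ID and VM ID are illustrative; verify the exact resource path against the API guide for your NSX version):

# Attach an existing security tag to a virtual machine
curl -k -u admin:password -X PUT \
  https://nsxmgr.lab.local/api/2.0/services/securitytags/tag/securitytag-10/vm/vm-42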

A last subject I want to touch on is the integration of NSX with vRealize Automation (vRA). Especially when you're building multi-machine blueprints in vRA, NSX will help you create network configurations for multi-tier applications. There's also a vRealize Orchestrator (vRO) plug-in available for NSX. With this plug-in it's possible to add NSX functionality to your vRA virtual machine deployment workflows. In this way you can, for example, add newly deployed virtual machines to a security group which is linked to a rule set in the distributed firewall. Possibilities are endless!

It's time to wrap up now! I hope this was useful; if you have any questions, please leave a comment below!

The post Some questions and answers about VMware NSX #SDN appeared first on viktorious.nl - Virtualization & Cloud Management.


VMworld 2015: What’s new in the SDDC

$
0
0

The post VMworld 2015: What’s new in the SDDC appeared first on viktorious.nl - Virtualization & Cloud Management.


VMware's Software Defined Datacenter (SDDC) approach extends virtualization concepts like pooling, abstraction and automation to all data center resources and services. The SDDC includes components for compute, network and storage virtualization. VMware made several announcements in the field of the SDDC:

  • VMware VSAN 6.1
  • VMware EVO SDDC

Read on to learn more about these announcements.

Virtual SAN

Since its launch Virtual SAN has already undergone some great improvements. At VMworld 2015 some interesting additional enhancements and improvements were announced:

  • Support for stretched cluster configurations, including stretched cluster health monitoring - In a stretched cluster VSAN scenario you can deploy Virtual SAN between geographically dispersed locations, based on an active/active architecture. The stretched cluster scenario is supported for both hybrid and all-flash VSAN configurations. A VSAN stretched cluster also involves the deployment of a witness appliance, which can be hosted on vCloud Air or deployed to your own third site. Also introduced are fault domains and site read locality. A VSAN stretched cluster configuration supports both L3 routed networks and L2 networks, with a maximum latency of 5 ms.
  • Enhanced vSphere Replication - vSphere Replication now supports a minimum RPO of 5 minutes between VSAN datastores. This 5-minute RPO is exclusively available to Virtual SAN 6.x.
  • Support for FT, Oracle RAC and Windows Server Failover Clustering - VSAN now supports VMware Fault Tolerance, Oracle RAC and Windows Server Failover Clustering. Exchange DAG cluster configurations are only supported with file share witness quorums.
  • New SSD drives supported - Intel NVMe and Diablo Ultra DIMM devices are now both supported.
  • A two-node cluster size is now supported - This configuration requires a third component: the witness virtual machine.

VSAN 6.1 also introduces some improvements to usability and management. Think of upgrading through the UI, bulk claiming of disks for disk groups and of course the required configuration options for stretched clusters. Also available are central health reporting options via vCenter alarms. These alarms are triggered in case of unsupported hardware or if a periodic health check fails. The VSAN health check includes checks for your stretched configuration and some proactive tests on virtual machines, multicast and storage performance.

VSAN 6.1 comes with the new Virtual SAN Management Pack for vRealize Operations, for full integration with vRealize Operations Manager 6. vRealize Operations delivers a comprehensive set of features to help manage Virtual SAN:

  • Global View
  • Performance Monitoring
  • Capacity Monitoring and Planning
  • Health Monitoring and Availability

[Screenshots: Virtual SAN dashboards in vRealize Operations]

These screenshots give you a first impression of the integration.

VMware EVO SDDC

When it comes to hardware in an SDDC there are three approaches: BYOD (build your own datacenter), converged infrastructure (like FlexPod or VCE) or hyper-converged (EVO:RAIL, Nutanix, Atlantis HyperScale). A new approach in this field is VMware EVO SDDC, formerly known as VMware EVO:RACK. EVO SDDC consists of a full rack with 24 servers, two 10 Gb top-of-rack switches and a 1 Gb management switch for out-of-band connectivity. The first rack in your setup also includes two 40 Gb spine switches.

The EVO SDDC software layer includes vSphere, Virtual SAN and NSX for virtualization. For operations management vRealize Operations, vRealize Log Insight and optionally vRealize Automation and Horizon View are used. The EVO SDDC Manager is a new component added to the configuration; it is responsible for updates to the entire stack, including BIOS updates. There are several partner plans with VCE, Quanta and DELL to build integrated systems based on EVO SDDC. You can use VSAN storage or connect external Ethernet-based storage.

EVO SDDC installation consists of four simple steps: power on the rack, let the rack install itself automatically, complete the second boot as the customer, and start creating workloads. Several physical racks can be combined into one virtual rack. Each virtual rack has a three-host management cluster which runs the management components. On EVO SDDC you deploy one or more workload domains, for example a production domain, a test/dev domain and/or a domain for VDI. A workload domain has its own vCenter management server, NSX Manager and NSX controllers. A VDI domain adds the Horizon View components to the domain.

I hope you will find this useful; please stay tuned for more posts this week, straight from VMworld.

Also read:
- VMworld 2015: What's new in SRM 6.1
- VMworld 2015: vCloud Air improvements and new features

The post VMworld 2015: What’s new in the SDDC appeared first on viktorious.nl - Virtualization & Cloud Management.

Atlantis to announce USX 3.0 and an update on HyperScale



At VMworld it's not only VMware that's announcing new stuff. Atlantis Computing will announce version 3.0 of their USX solution. Because HyperScale (Atlantis' hyper-converged solution) is using USX, we also get a new version of HyperScale. This is the next update after the release of USX 2.2 back in February '15 and the introduction of HyperScale in May.

USX 3.0 brings VVOLs to USX and HyperScale. The VASA provider, required for VVOLs, is part of the USX manager. USX sits between your storage solution and the hypervisor, which means that VVOLs effectively become available for any SAN solution behind USX. Because HyperScale is using USX, HyperScale can now also leverage VVOLs. This makes Atlantis the first and currently only hyper-converged solution that supports VVOLs. You can check the HCL here to get an overview of supported VVOL solutions.

Another new feature in USX 3.0 is SmartSnap, which offloads your vSphere-level (vCenter) snapshots to Atlantis USX. USX uses FastClone to snap a virtual machine using metadata manipulation only. This makes snapshotting pretty fast, because a copy-on-write is not required.

With Remote Replication, also new in USX 3.0, you can replicate between USX and HyperScale instances. Currently a one-hour RPO is available; initial configuration is simple and just "one click".

The last thing Atlantis introduced is Atlantis Insight. Atlantis Insight provides options for proactive software updates "with the push of a button" and sends alert & log messages to Atlantis for timely case resolution. Going forward, new features will be added to Atlantis Insight.

The official press release on USX 3.0 and HyperScale is expected tomorrow.

That's it for now, stay tuned on Twitter for more updates during VMworld 2015. Hope this was helpful; you can leave your comments below.

The post Atlantis to announce USX 3.0 and an update on HyperScale appeared first on viktorious.nl - Virtualization & Cloud Management.

Use RVTools to create an audit trail for your vSphere environment



Today a little trick on how to use RVTools to create an audit trail.

Maybe you want to know when the size of a particular virtual machine disk has changed, or you want to have historic information on virtual machine placement (for example when dealing with Oracle licensing). RVTools can help here.

If you don't know RVTools, you should go directly to www.robware.net. The tool is the most powerful option for vSphere inventory collection: you can create a CSV or Excel file containing inventory data of your ESXi hosts, virtual machines, datastores, networks etc. RVTools is available for free and should be in your tools library.

A powerful feature that not everybody knows about is running RVTools in batch or command-line mode. You can use this option to create CSV or Excel files straight from the command line.

Some examples (from the RVTools manual):

RVTools.exe -passthroughAuth -s vc5.robware.local -c ExportAll2xls -d c:\temp

or

RVTools.exe -s virtualcenter.domain.local -u userid -p password -c ExportAll2xls -d directory

Now add this command line to the Windows Task Scheduler and voilà: you've got your RVTools audit trail creator. The manual includes a batch file example which makes things even simpler.
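If you want a bit more control than a one-line scheduled task - for example a separate, timestamped folder per export so you keep the full history - a small wrapper script does the trick. Below is a minimal Python sketch; the RVTools installation path, vCenter name and export folder are assumptions you will need to adjust, and it assumes pass-through authentication works for the account running the scheduled task.

# rvtools_audit.py - schedule this with the Windows Task Scheduler (sketch)
import os
import subprocess
from datetime import datetime

RVTOOLS = r"C:\Program Files (x86)\Robware\RVTools\RVTools.exe"  # assumed install path
VCENTER = "vc5.robware.local"                                    # your vCenter server
EXPORT_ROOT = r"C:\RVTools-audit"

# One folder per run (e.g. C:\RVTools-audit\2015-11-02_0600), so every export
# is kept and you can compare disk sizes, VM placement etc. over time.
target = os.path.join(EXPORT_ROOT, datetime.now().strftime("%Y-%m-%d_%H%M"))
os.makedirs(target, exist_ok=True)

# Same command-line options as the examples above, pointed at the new folder.
subprocess.run([RVTOOLS, "-passthroughAuth", "-s", VCENTER,
                "-c", "ExportAll2xls", "-d", target], check=True)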

Have fun! Hope this was helpful.

The post Use RVTools to create an audit trail for your vSphere environment appeared first on viktorious.nl - Virtualization & Cloud Management.

Some thoughts on upgrading a vRO license when using the embedded DB



vRealize Orchestrator (vRO) is VMware's orchestration tooling and includes very powerful workflows and integration options with all sorts of solutions. vRO uses a database to store workflows and configuration data. Options are MS SQL Server, Oracle and an embedded Postgres database. The latter option requires some specific attention, especially when you want to use the vRO embedded Postgres database beyond the 90-day evaluation period.

What's the case? Well, if you're using the embedded Postgres DB, some options are not available on the vRO configuration page (from the vRO manual):

When the database is embedded, you cannot set up Orchestrator to work in cluster mode, or change the license and the server certificate from the Orchestrator configuration interface. To change the server certificates without changing the database settings, you must run the configuration workflows by using either the Orchestrator client or the REST API.

You cannot change/upgrade the license on the configuration page; you must run a license configuration workflow, available under Library > Configuration > License > Enter license key or Use vCenter Server License. So far so good, BUT a problem arises when your license expires: the vRO service is stopped and cannot be started because... no valid license is available! A typical chicken-and-egg problem I would say.
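By the way, you don't have to click through the vRO client for this: configuration workflows can also be started through the vRO REST API, as long as the service is still running. Below is a minimal Python sketch; note that the workflow ID and the input parameter name are placeholders (look up the real ones in the vRO client for your version), and the endpoint follows the vRO 6.x REST API - verify it against your release.

# set_vro_license.py - start the licensing workflow via the vRO REST API (sketch)
import requests

VRO = "https://vro.domain.local:8281"                  # assumed vRO address
AUTH = ("vcoadmin", "password")                        # assumed credentials
WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"   # placeholder: look up the real ID

# vRO expects workflow inputs in this nested format; "licenseKey" is a
# hypothetical input name - check the actual workflow presentation.
body = {"parameters": [{
    "name": "licenseKey",
    "type": "string",
    "value": {"string": {"value": "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"}},
}]}

resp = requests.post(VRO + "/vco/api/workflows/" + WORKFLOW_ID + "/executions",
                     json=body, auth=AUTH, verify=False)
resp.raise_for_status()  # HTTP 202 means the workflow run was accepted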

Conclusion: When using the embedded database change your license right after you have completed the vRO installation and don't wait until the license expires.

Question: But if the license has expired, what are your options? I'm looking for a solution here; you need the workflows to update the license, but the workflows aren't available because you don't have a valid license. Help is appreciated, through Twitter or by leaving a comment below. I'm talking about the Windows-installable vRO here, but I presume the case is the same for the vRO appliance.

The post Some thoughts on upgrading a vRO license when using the embedded DB appeared first on viktorious.nl - Virtualization & Cloud Management.

vCenter 6 upgrade: Source vCenter Server schema validation found an issue.



During an upgrade from vCenter 5.1 Update 3 to vCenter 6.0 Update 1, I encountered the following error:


Error: Source vCenter Server schema validation found an issue.
Resolution: Read the vcdb_req.err file and address the issues found.

The vCenter Server installation is using an external Microsoft SQL Server 2008 R2 SP2 database.

Errors in vcdb_req.err

A short investigation of vcdb_req.err revealed several errors:

WARNING: Cannot execute statement(rc=100).
DELETE FROM VPX_TABLE
^^^^^^^^^^

WARNING: Cannot execute statement(rc=100).
DELETE FROM VPX_INDEX_COLUMN
^^^^^^^^^^

WARNING: Cannot execute statement(rc=100).
DELETE FROM VPX_SCHEMA_HASH
^^^^^^^^^^

and also:

1 [42000](50000) [Microsoft][SQL Server Native Client 10.0][SQL Server]ERROR ! Missing indexes: VPX_HIST_STAT1_47.IX_VPX_HIST_STAT1_47_TIME_ID; VPX_HIST_STAT1_48.IX_VPX_HIST_STAT1_48_TIME_ID;

Solution

The first issue probably has to do with permissions. Various posts on the VMware Communities forums suggest temporarily adding the db_owner permission on the MSDB database for the vCenter account that is connecting to the database. Another option is to temporarily configure sysadmin rights for the vCenter account that is used to connect to the SQL Server database.

Also walk through the steps in "Prepare Microsoft SQL Server Database before upgrading to vCenter 6.0". The cleanup_orphaned_data_MSSQL.sql script, available on the vCenter 6 ISO, will clean up your database and smooth the upgrade process.

The second issue reports some missing indexes in the SQL Server database. I ran into this VMware Communities post, which discusses the same error, only for a different table/index. The missing indexes in my case are IX_VPX_HIST_STAT1_47_TIME_ID and IX_VPX_HIST_STAT1_48_TIME_ID. The steps to take are (from the communities post):

  1. Open the SQL Management Studio, navigate to the affected table and click Indexes.
  2. Create a new Index, name it "IX_VPX_HIST_STAT1_47_TIME_ID", set the type to "Nonclustered" for this specific index (note: exact settings differ per index).
  3. Add TIME_ID as an index column.
  4. Add COUNTER_ID and STAT_VAL as nonkey columns.
  5. Repeat the steps for the VPX_HIST_STAT1_48 table.

Note 1: The index configuration should be exactly the same as the other history data tables.

Note 2: If you don't feel comfortable editing SQL Server, open a support request with VMware Global Support Services (GSS). All information is from my own experience; use these guidelines at your own risk.
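For completeness, the GUI steps above can also be expressed as two CREATE INDEX statements. Here's a minimal Python/pyodbc sketch that mirrors them; the server and database names are assumptions, and (per the notes above) compare the definition against an existing VPX_HIST_STAT1_* index before running anything against your VCDB.

# create_vpx_indexes.py - recreate the two missing history stat indexes (sketch)
import pyodbc

# Assumption: the vCenter database is named VCDB and Windows authentication
# has sufficient rights; adjust the connection string to your environment.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=sqlserver.domain.local;DATABASE=VCDB;Trusted_Connection=yes",
    autocommit=True)
cursor = conn.cursor()

# Nonclustered index on TIME_ID with COUNTER_ID and STAT_VAL as included
# (nonkey) columns - steps 2 to 4 above expressed as T-SQL.
for table in ("VPX_HIST_STAT1_47", "VPX_HIST_STAT1_48"):
    cursor.execute(
        "CREATE NONCLUSTERED INDEX IX_{0}_TIME_ID "
        "ON {0} (TIME_ID) INCLUDE (COUNTER_ID, STAT_VAL)".format(table))
conn.close()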

After running through these steps, the upgrade to vSphere 6 continued as expected.

Another KB article on the "Source vCenter Server schema validation found an issue" problem is available in the VMware Knowledge Base; however, that KB article was not applicable to my situation. And there's this VMware KB article, which discusses different database issues when upgrading to vSphere 6.

I hope this was helpful, please leave any comments below.

The post vCenter 6 upgrade: Source vCenter Server schema validation found an issue. appeared first on viktorious.nl - Virtualization & Cloud Management.
