So there is a fun website called “CloudCred” that allows individuals or teams to take part in various tasks and challenges – everything from the technical to the social and fun is covered – and it makes for quite a good team-building exercise on top of the leaderboard competition aspect!
One of the tasks is to blog about my team members and include links to their own blogs. We have quite a few team members so I can’t cover all of them, but here goes:
Of course this task is for our team – Xtravirt Limited – and we also have a company blog you can go and visit for some excellent content around the Cloud and Virtualisation industry.
Big data is quickly becoming a problem for enterprise companies, as it presents difficulties in how best to analyze, monetize, and capitalize on large amounts of information within a business and across the globe. Information and data come from a vast number of sources, and companies are looking for partners that can help across the entire big data spectrum – assessing how the company is doing and all of the information surrounding it, as well as preserving that information for later use and campaigns. Over the next decade these needs will only grow for enterprise-level organizations, and it is important that they explore the options and solutions to the big data revolution in order to stay on top of their game.
The Sources of Data
There are a few solutions for managing the big data revolution, but first it is important to understand where all of this information is coming from. Information arrives from many sources and in several formats, but the most common places data is found are on social media, in phone and web applications, on customer profiles, and in documents. Other vital information is found in financial transactions, as well as in emails, videos, and subscriber lists. These main sources of data are continually being improved upon and are gathering more information by the second. This can seem daunting to enterprise businesses, but there are a few straightforward ways to make sense of the massive amounts of data coming in from different sources at all times.
The Solutions
There are three solutions that intertwine to form one large answer to the big data revolution. The first is to turn your data center into a virtualized one, or to find a data center that can host your company virtually. The second is to look into storage options that can hold, and keep pace with, all of the information being gathered. Thirdly, Data Center Infrastructure Management (DCIM) is key to have in place. It is the combination of all three of these solutions that lets an enterprise business manage the big data revolution.
Author Bio
Chad Calimpong has been recognized locally and nationally for his photography and video documentaries. He enjoys cooking, baking, and has a passion for technology and computers. He currently resides in Austin, Texas with his wife and two cats.
[Disclaimer] I’d like to clarify to readers of this blog that I’m not affiliated with Dell, and have not been sponsored or paid to publish this article. The information and images in the above blog post were provided to me by Dell, hence it being a guest blog post. I encourage anyone interested in solutions to the “big data” revolution to also explore other hardware vendors’ solutions and compare the available offerings in detail.
For a recent personal project I have been working on (vMetrics for WordPress), I had a requirement for some icons, all virtualization-related. I had a quick look around but couldn’t find many that had no strings attached, so I decided to create my own set. These are all original and created by me. You will of course recognise some of the designs from the vSphere Client – I used these as inspiration and re-created them from scratch.
Feel free to use these in your own projects, charts, or presentations. All that I ask is that you drop me a comment below to let me know if they were useful or not 🙂
PHD Virtual Backup & Replication is, as the name would suggest, a complete, all-in-one backup and replication package. It is available in both VMware and Citrix XenServer flavours. I have long been a user of other virtualization backup solutions but, until recently, never had the chance to play with PHD’s offering. A couple of weeks ago, PHD Virtual asked me to take a look at their backup offering and put down my thoughts in the form of a sponsored review. So, I got the appliance installed in my lab environment and set about recording my thoughts and observations about the product whilst using it for various backup, recovery, and replication tasks over the last two weeks.
Thoughts and Observations
Getting PHD Virtual Backup up and running in my virtual lab environment was an absolute pleasure. Let’s just say the product definitely does what it says on the tin – installation was as simple as deploying the downloaded OVF file with the vSphere Client (File -> Deploy OVF Template), powering up the “Virtual Backup Appliance”, and setting up some basic network settings. I would say the longest part of the installation for me was finding the line in the installation steps that said “Press CTRL + N to enter the network settings in the console” (which wasn’t long at all)! After entering my network settings, I had the choice of either browsing to the IP address of my appliance, or running the PHDVB_Install.exe file to get the Virtual Appliance “Management” console installed. I simply ran the installer and within 8 minutes or so (from start to finish) I had PHD Virtual Backup & Replication up and running in my vSphere lab.
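As a side note, if you’d rather script a deployment like this than click through the vSphere Client wizard, VMware’s ovftool CLI can do the same job. Here is a minimal sketch driving it from Python – the appliance filename, credentials, and inventory path are placeholders for your own environment:

```python
# Deploy a downloaded appliance OVF using VMware's ovftool CLI.
# Assumes ovftool is installed and on the PATH; all names and paths
# below are placeholders for your own lab.
import subprocess

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=PHD-VBA",            # display name for the new appliance VM
    "--datastore=datastore1",    # target datastore
    "--network=VM Network",      # portgroup for the appliance NIC
    "phdvb.ovf",                 # the downloaded appliance OVF
    "vi://administrator@vcenter.lab.local/Lab/host/esxi01/",  # target host
]
subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
```

ovftool prompts for the password if it isn’t embedded in the vi:// locator, so there is no need to hard-code credentials.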
The product supports VMware and Citrix (XenServer) in terms of hypervisor platforms. As stated above, in this review I will be working with a VMware vSphere 5.0 environment, and have therefore put the VMware edition to the test.
Something I liked this far into my experience was that I didn’t have to choose between running my backup solution on a physical or a virtual machine – it’s simple – the product is a Virtual Appliance. You deploy the initial appliance and, if needed, scale by deploying more virtual appliances. This means you don’t need to worry about managing separate physical servers for your backup solution. This is just one of the reasons why PHD Virtual Backup is so easy to deploy.
The Virtual Appliance is pre-configured with the following specifications:
1 vCPU
1GB RAM
8GB disk
In terms of actual backup storage, you do of course have a few options:
Add a Virtual Disk to the Appliance itself (VMDK)
Configure Network storage (which could be):
a CIFS target
an NFS target
I chose to use a separate NFS mount on a virtual appliance I use for general-purpose storage and backup in my lab, so I simply opened the appliance management console (right-click in the vSphere Client -> PHD Virtual Backup -> Console) and went to “Backup Storage” under “Configuration” to configure my NFS datastore as a backup target. At this stage you can also set a couple of warning/stop thresholds for free disk space on your target, as well as enable or disable backup compression.
Backing up VMs
As the virtual appliance integrates with the vSphere Client, dealing with configuration tasks and actually setting up backups for your VMs is simple. There is no need to remote to another server or open up a console to your backup appliance VM. For my testing I configured a couple of different backup jobs – one to back up my vCenter, Update Manager, and other VI VMs, and one to back up a couple of general-purpose VMs in my lab.
Backup speeds themselves were good, and on par with what I would expect from a product that utilises the VMware vStorage APIs for Data Protection (VADP). The first job I ran took a little while to do the initial (full) backup, but after this the subsequent runs correctly used CBT (Changed Block Tracking) to pick up only changed blocks and copy these up, significantly reducing the backup times of my VMs (there is a quick way to verify CBT per VM sketched after the list below). VMware HotAdd is also utilised to help with quicker VM backup times. Each job that runs gives you detailed information on statistics such as:
Dedupe Ratios (Per VM and Per individual VM Disk)
Job average speed
Dedupe Ratios (Per Job)
Total amount of Data Written (useful for tracking how well CBT is working for example)
CBT Enabled/Not
Scheduling / Time details
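On the CBT point above: if you ever want to confirm why a job is (or isn’t) running incrementally, you can check the Changed Block Tracking flag on your VMs directly. Here is a minimal sketch using the pyVmomi library (pip install pyvmomi) – the vCenter address and credentials are placeholders, and certificate verification is disabled for lab use only:

```python
# List the CBT status of every VM in the inventory via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is not None:  # config can be None for inaccessible VMs
            print(vm.name, "- CBT enabled:", vm.config.changeTrackingEnabled)
    view.Destroy()
finally:
    Disconnect(si)
```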
A nice feature I found at this stage was the ability to view a detailed job log right from the console. Let’s say a job, or a VM within a job, gave a warning or error message for some reason, and you wish to find out the cause. All you need to do is right-click the job name and select “View Log”. This pops up a window with a detailed, timestamped job log, allowing you to dig into each step of the backup process and see what happened at each stage of that particular backup job.
File Level Restore
Restoring files is also a simple task. From the main console, there is a “FLR” (File Level Recovery) section which handles this process. I tested restoring files from within two different VMs using this console. Both were Windows Guests (one Server 2003 Standard and one Server 2008 R2 Standard VM). The process went as follows:
Under “Backup Catalog” where your previous backup jobs are listed, select the VM / VM Disk you would like to restore from.
Click the “FLR” button.
Go through the “Backup to Share” wizard and tick the option to “Add target to iSCSI Initiator on this computer”.
Finish the wizard, and the VM disks are mounted on the local machine and are now accessible.
Following the wizard through to mount the VM disk(s) on the local machine for File Level Restore
If you take a look at the Microsoft iSCSI Initiator tool you can see the two targets that have been mounted…
Incidentally, file-level restores from Linux/Unix-based VMs can also be done with PHD VB. You just need to supplement the restore process with a third-party tool such as “Ext2explore”. You follow the same process to mount the VM disks using the FLR wizard, but then use Ext2explore to browse the mounted disk(s) instead of Windows Explorer.
Restoring full VMs
I must say that I really like the features available in PHD Virtual Backup & Replication when it comes to doing full or partial restores of VMs. The wizard you use is nicely laid out and functional. You also get some great restore options, such as appending a “_restored” tag to the end of your restored VM’s name, auto-generating a new MAC address for the restored VM, and changing the default VM network (portgroup).
These are all great features when it comes to restoring VMs, especially if you are restoring back into a production environment alongside the original VM and would like to ensure that there are no network conflicts, for example. I have a dedicated, isolated VM network for testing (no vSwitch uplinks to physical adapters), so the option to change the default network on the restored VM was perfect for my testing.
VM Replication
PHD Virtual Backup also has replication functionality. Ideally you will want more than one VBA (Virtual Backup Appliance) running – for example, one in your DR site and one in your production site. The appliance in your DR site essentially connects to the backup storage at your production site and hooks into the backup jobs run there to find the latest VM changes to replicate. So, when you set up a particular replication job, you should ideally schedule it to start a short while after the relevant backup job completes. This ensures you get the latest changes replicated; the replication job fetches only the changes since its last run. To enable replication, you just need to complete a one-off configuration task using the PHD VB console – adding a Replication Datastore. This simply points the appliance at an existing PHD VB backup storage area, which can be a CIFS, NFS, or VMDK disk store that you are currently using for backups. As with VM restores, you also get some useful options when replicating to change VM networks (portgroups) or auto-generate new MAC addresses for replicated VMs. I should also mention that you are able to do replication even with just one VBA.
From the PHD console, you are able to test your replicated VMs. This is quite a handy feature: after putting a replicated VM into “TESTING” mode, you can use the vSphere Client to power up the replicated VM and perform any testing and validation you might require. A snapshot is added to the VM to ensure that its pre-testing state is preserved. Once testing is complete, you simply click “Stop Test” in the console; the VM is powered down and changes are rolled back to the pre-testing state.
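For the curious, this snapshot-then-rollback pattern is the standard vSphere one, and you can reproduce the general idea yourself with pyVmomi. To be clear, the sketch below is only an illustration of the mechanism, not PHD Virtual’s actual implementation; “vm” is assumed to be a vim.VirtualMachine object looked up as in the earlier CBT example:

```python
# Illustration of the snapshot / test / rollback pattern described above.
# NOT PHD Virtual's internal code - just the generic vSphere mechanism.
from pyVim.task import WaitForTask

# 1. Preserve the pre-test state with a snapshot (no memory, no quiesce).
WaitForTask(vm.CreateSnapshot_Task(name="pre-test",
                                   description="State before replica test",
                                   memory=False, quiesce=False))

# 2. Power the replica on and run whatever validation you need.
WaitForTask(vm.PowerOnVM_Task())
# ... testing happens here ...

# 3. "Stop Test": power off, roll back to the snapshot, then remove it.
WaitForTask(vm.PowerOffVM_Task())
WaitForTask(vm.RevertToCurrentSnapshot_Task())
WaitForTask(vm.snapshot.currentSnapshot.RemoveSnapshot_Task(removeChildren=False))
```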
Summary
Pros
“All in one” backup solution (everything you need in one Virtual Backup Appliance).
Simple and quick to deploy (or scale by adding more VBAs).
Good feature set (VM Backup, File Level Restore, Full VM restore, and Replication).
Easy to work with – simple/logical User Interface.
Integrates with the vSphere client for quick and easy access to Configuration, Backup, Restore and Replication options.
Great File-level restore – quick and easy access to files within VM backups (Windows or Linux/Unix).
Nice features available to change networking settings on restored VMs for testing or running alongside existing VMs.
Configurable VM Backup retention settings.
Processing of multiple VMs at once in a backup job – allows VMs to be backed up in multiple streams instead of a “serial” fashion.
Cons
No network “fine tuning” options – for example, tuning deduplication when backing up over a WAN or LAN as opposed to direct disk storage. This would essentially let you choose quicker (albeit larger) backups for local storage jobs, or longer-running backups with smaller sizes to transmit over WAN links.
A couple of small caveats when using replication (for example, configuration changes made to the original source VM are not replicated across to the replicated VM).
No automation options – these would be nice to have for backup, restore, replication, or reporting tasks (a PowerShell module would be a welcome addition).
Conclusion
At the end of the day, PHD Virtual Backup is a great integrated backup and recovery product, with a little room for improvement in adding some extra “nice to have” features. The VBA (Virtual Backup Appliance) is dead easy to deploy, and managing your backup, restore, and replication processes is just as simple – I think these are the best parts of the product. Whilst using it, I found each of the various backup and DR processes I needed easy to work through, thanks to a well-laid-out UI that “just works”. Access to files in VM backups via the file-level restore wizard was a highlight for me – it didn’t take long at all to get at historic files and restore them using the “FLR” wizard.
The appliance offers a good selection of options, but these could be bettered by some form of automation (perhaps PowerShell access) and some more advanced settings for power users – for example, advanced backup job options to fine-tune compression or deduplication.
A free trial of the product is available and I would definitely encourage you to take a look at this – as mentioned above, being so easy to deploy and manage it won’t be long before you are up and running. This Backup & Replication product does offer everything you need to handle DR for your VMware Virtual Environment.
Useful resources:
Installing PHD Virtual Backup & Replication for VMware vSphere
I recently wrote a (reasonably!) lengthy article on how to set up your own VMware vSphere lab or test environment consisting entirely of virtual machines, running off one piece of host hardware. This is really handy, as a lot of people new to virtualization often think they need to purchase full-on server equipment to create a white box, or find second-hand servers on eBay. Even more often, they make the mistake of overlooking the CPU feature set required to run vSphere – hardware virtualization – buying 64-bit capable servers (good) that lack the Intel VT or AMD-V features required for vSphere (bad!).
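Incidentally, if the machine in question can boot Linux (even from a live CD), checking for those CPU flags takes seconds. A quick sketch, assuming Linux and a readable /proc/cpuinfo:

```python
# Check /proc/cpuinfo for the hardware virtualization flags vSphere needs:
# "vmx" = Intel VT-x, "svm" = AMD-V.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("Intel VT-x (vmx):", "vmx" in flags)
print("AMD-V (svm):", "svm" in flags)
if not {"vmx", "svm"} & flags:
    print("No hardware virtualization support detected - vSphere won't run!")
```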
This hardware problem is where running everything virtualized comes in really handy. As well as keeping your hardware and lab requirements/size down, you have everything you need in one installation of VMware Workstation. You’ll also be able to test out some really cool features that vSphere / vCenter Server has to offer – such as HA (High Availability) and DRS (Distributed Resource Scheduling). In the article I also reference a few best practices to follow when configuring the real deal for production use. I hope this comprehensive guide is useful for those of you looking to set something like this up!