WannaCry and the importance of patching

A blog by Ross Devine, head of technical support.

I’m sure that you’ll all have heard about the outbreak of the WannaCry ransomware, which was widely reported in the UK media due to the devastating effect it had on parts of the NHS.

This ransomware was propagated via a vulnerability that Microsoft patched back in March.

We completely understand that patching has typically been regarded as a bothersome task that people would rather put off until tomorrow.

Hindsight is all well and good, but had the affected organisations and bodies kept their server and client patching up to date, they would not have suffered at the hands of the criminals who released this ransomware into the wild.

This outbreak has shown us all that patching, no matter how mundane, should be placed at the core of our security plans rather than at the periphery. Moving forward, we advise all of our customers, and businesses in general, to review their update schedules.

For all our customers that take Proactive Support, this is a task that we will be happy to assist with where required.

This has thrust the importance of patching any organisation’s or end user’s IT systems into the public consciousness. No doubt the next time you have an outage, your MD or end user will be asking whether the servers have been patched. What will your answer be?


For information on security and managed hosting services contact virtualDCS on 03453 888 327 or by emailing enquiries@virtualDCS.co.uk



Software as a Service (SaaS) is one of the fastest growing divisions within the cloud industry, but there is still a plethora of questions around the technology itself.

To assist developers on their migration to the cloud, we’ve compiled a list of FAQs around software hosting and Software as a Service technology in general. These queries have been compiled using our personal experience with customers, along with other blogs and articles online.

What’s the difference between Software as a Service and Cloud Computing?

Cloud Computing is defined as ‘the practice of using a network of remote servers hosted on the Internet to store, manage and process data, rather than a local server or personal computer.’

The cloud can refer to anything that’s hosted remotely and delivered via the Internet, including software, which in this form of delivery is known as Software as a Service (SaaS). There are a number of services that can be provided through the cloud, such as ‘Recovery as a Service’, ‘Platform as a Service’ and ‘Software as a Service’.

How do I decide between a SaaS solution and a traditional solution, and what are the benefits?

A traditional solution would involve shipping your software to site on disk, where the user or IT department would then install the application on the local machine. This in itself extends the sales process by a considerable amount of time, whereas with SaaS the end user has access to the software within minutes of the initial order. This is one of the many reasons end users are embracing this method of software delivery.

Hosted software also provides an additional layer of intellectual property protection. With traditional software solutions, the user installs the base code on a local machine, where it could theoretically be altered and replicated; this is not the case with hosted software.

Another benefit for both end users and developers is unified upgrades. With SaaS, developers can roll out unified software updates and patches, giving the end user prompt access to the latest software versions. For developers, this ensures vulnerabilities are patched promptly, and it eases the job of both the in-house support team and the customer’s IT department.

Should I partner with a cloud provider to deliver my Software?

Given today’s technology, there’s no debate that cloud providers are capable of offering an efficient and highly available solution that can replicate (if not exceed) any internal IT infrastructure. On the other hand, hosting software on premise enables the developer to have full control over every aspect of the hosting process, but this is undoubtedly expensive and time consuming.

The answer to this question really comes down to your organisation’s needs. Does it have enough resource, experience and capital to purchase, maintain and upgrade its infrastructure to the required standards? Is this high level of control required? Would the organisation benefit from a partner as opposed to in-house ownership?

I’m thinking of partnering with a vendor, but what if they go out of business?

As hosting providers merge and are acquired every day, this is a legitimate concern, and the answer will depend on the supplier themselves. They should, however, have a reasonable strategy in place should this ever occur. Discuss this before any contracts are signed, so that you can judge whether the answer is acceptable to your business.

If you have any questions around infrastructure services or hosting your software online, the virtualDCS team is more than happy to help: simply contact us by filling out our enquiry form or by calling 03453 888 327. We also offer a free 30-day proof of concept for software hosting.

Finding the right Disaster Recovery provider

Each year the cloud computing market expands, offering new and creative Disaster Recovery solutions, such as Veeam Cloud Connect, to customers worldwide.

As the cloud grows and solutions expand, businesses can often get lost in a haze of options, especially around Disaster Recovery.

Consequently, many businesses are not asking the right questions when qualifying Business Continuity partners. This blog is designed to highlight some key elements for you to consider when vetting a new Disaster Recovery solutions partner.


Accreditations

When you’re considering entrusting all of your confidential information to a supplier, one of the first things you should do is consider accreditations. Accreditations act as an industry standard, so you can be confident in the level of data security provided. For your peace of mind, it’s vital that the organisation holds at least the ISO 27001 standard.

Key locations

Even though your data is stored ‘in the cloud’, it actually resides in a physical data centre selected by your provider. In light of this, it’s important to know where your data would be stored.

For example, would your information be subject to the US Patriot Act? Is the data centre in a low-risk area or is it susceptible to natural disasters such as floods? If an incident were to happen, would the data be available in a second location?


Experience

Another valuable piece of the puzzle is experience. For many, asking for references is an obvious and basic step, but it’s surprising how many companies still don’t do it. With many cloud services now being automated, it’s easy to see why this vital step would be missed, especially when companies display accreditations and alleged testimonials online.


Support

You should also consider the support levels that your provider offers. Is the support team based in the same time zone as you? Can you speak to them over the phone, or is it an email-only support system?

What are the SLAs for the support team? Are the answers to these questions acceptable to your business? At the very least, they should help you narrow down a potential partner.

For more information on disaster recovery solutions contact virtualDCS or visit our solutions pages.

Best practices for testing your disaster recovery plan

Testing a disaster recovery plan is critical to any successful business continuity strategy.

Without regular testing, businesses can’t be certain that all critical files can be recovered in the event of an incident, so we’ve compiled some useful tips to help ensure that your organisation benefits from its disaster recovery planning.

Set goals and objectives

A successful disaster recovery strategy and testing period starts with planning. Your organisation should put together a document that describes how the test will be carried out, who will implement it and when it will take place. You should also describe the goals and objectives you aim to achieve from it.

Incorporate and prepare all relevant technology

The above strategy should include details of all the technology that will be used to test the plan, such as network elements, hardware, applications and databases.

By reviewing this within the planning stage, you can ensure that each component is ready for use. Not only does this help to avoid unnecessary costs and ensure a successful testing period, it also raises the question: if a component were unavailable, what would happen if sudden invocation were needed?

Avoid conflicting schedules

DR tests can often take hours, so you should schedule them as far in advance as possible, giving fair warning to others within your IT department and ensuring that similar tests are not run at the same time.

Complete a post-test analysis

After completing the test, you should also carry out a post-test analysis, reflecting on what you’ve learnt. These documents will help to highlight any discrepancies between planning and implementation, while also acting as a knowledge base moving forward.
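To make the post-test analysis concrete, the hypothetical sketch below compares each recovery step’s planned time against what was actually achieved during the test, surfacing the discrepancies worth documenting. The step names and figures are illustrative only.

```python
# Hypothetical post-test records: (step, planned minutes, actual minutes).
results = [
    ("restore file server", 30, 25),
    ("restore database", 45, 70),
    ("redirect DNS", 10, 10),
    ("verify application login", 15, 40),
]

def discrepancies(records):
    """Return the steps that overran their planned recovery time,
    with the size of the overrun in minutes."""
    return [(step, actual - planned)
            for step, planned, actual in records
            if actual > planned]

for step, overrun in discrepancies(results):
    print(f"{step}: {overrun} minutes over plan")
# restore database: 25 minutes over plan
# verify application login: 25 minutes over plan
```

Captured test after test, records like these show whether recovery times are trending in the right direction.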

For more information on how to create and implement an effective disaster recovery plan contact the team on 03453 888 327 or by emailing enquiries@virtualDCS.co.uk

Businesses are falling out of love with inner-city datacentres

New research suggests that inner-city datacentre facilities are falling out of favour with businesses that use disaster recovery and infrastructure solutions.

European datacentre power is estimated to grow phenomenally in the next few years, increasing by almost 20% between now and 2020. Although DC and platform storage trends are set to continue growing, user habits are now starting to change.

Datacentre pioneers

The Datacentre Europe pricing report claims that, with over 150 datacentre providers present, the UK is currently the largest player in the datacentre market. Moving forward, the bulk of datacentre investment will go into sites further away from cities, due to customer preference.

Peace of mind

Alex Rabbetts, CEO of MigSolv commented on the findings, sharing his thoughts behind the sudden change: “The first reason is big cities are not very secure places to have datacentres. Why would you put your data in a place that is potentially a terrorist target? In the case of Paris and Amsterdam, there’s also a risk of flooding, also, real estate and employment costs are higher in cities and power security can also be a big issue.”

Location is everything

This has led to a growing cluster of datacentres emerging on the outskirts of cities such as Leeds, Sheffield and Manchester.

Anthony Day, an intellectual property and technology lawyer at legal firm DLA Piper, also commented: “The datacentres still tend to be relatively close to the big cities, because you still need to have the latency and connectivity to connect to the bigger corporate clients. This is really important for firms in financial services, for example, as they’re involved with high-frequency trading.”

“Also, particularly for colocation clients, if they have designated IT partners, they’ll need easy access to the site to fix any hardware problems. If the datacentre is too far away, it’s going to increase the potential time it takes to resolve any problems.”

Data centre costs

A further driving force for change is that inner-city datacentre operators have higher costs and tend to charge more for their services. This is once again supported by the research, which finds that datacentre pricing in London is around 27% higher than at facilities outside the M25.

Although datacentre and consumer trends may be changing, it is clear to see that the cloud is here to stay.
