Data Center is our focus

We help you build, access, and manage your data center and server rooms.

Structured Cabling

We help structure your cabling: fiber optic, UTP, STP, and electrical.

Get ready for the #Cloud

Start your Hyper-Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software, and network (ITOM), and maintain your ITSM services.

Our Great People

A great team supporting happy customers.

Friday, July 27, 2012

REDSTONE, the IT Service Provider That Replaced HP OpenView with IT360


Global competition, shrinking IT budgets, an ever-changing IT-business landscape, and daunting complexity in IT monitoring and management: these are just a few of the problems managed service providers (MSPs) face today. UK-based Redstone is no different. As a leading network-based end-to-end managed services, infrastructure and technology provider, it is responsible for the IT infrastructure of many of the country’s leading organizations, and that is no easy task, says Larry Dutton, product manager for Redstone’s managed services.

Overcoming operational challenges

“With our heterogeneous IT environment, it gets more and more difficult to manage our infrastructure, and this is made even more complex because multiple IT management tools are often required to monitor and control disparate yet interdependent elements,” says Dutton.
Redstone provides a variety of managed services to meet the requirements of its clients. With such variety in Redstone’s offerings, it becomes difficult for the operations team to manage internal and external clients. Today’s customers are more demanding for several reasons:
  • Fear of IT outages and increasing dependency on IT for business
  • Paradigm shift in the way end users perceive and value IT products and services
  • Shrinking CAPEX budget
“Gone are the days when IT used to be easy with only a few resources to monitor,” said Dutton.
“We were using HP OpenView for a long time, but it just doesn’t fit into the current business landscape of exploding IT needs. It was bulky and complex, and we always had to depend on consultants and technical experts to reconfigure it to keep up with constantly changing IT. It was becoming increasingly time-consuming and expensive for us.”

ManageEngine Solution

It was clear during the evaluation discussion that Redstone would need an integrated view of the entire IT landscape to offer a best-of-breed MSP service. Dutton and the Redstone team determined that ManageEngine IT360 was the best fit for their needs for an integrated, easy-to-use web-based tool that eliminates the need to:
  • Install and manage multiple tool portals
  • Depend on experts to troubleshoot issues
  • Reduce margins to expand business
“The two best things I like about ManageEngine IT360 are the centralized dashboard feature where I can check the status of all customers’ infrastructure and alarms, and secondly the simplified web-based GUI,” said Dutton.
“It took us just a few weeks to get IT360 up and running. Even during the evaluation phase we knew that going with ManageEngine was a better choice than an alternative solution that we were evaluating in parallel.”

The ManageEngine Advantage

ManageEngine IT360 is designed from the ground up specifically for integrated IT management, unlike other vendors’ so-called integrated management products, which were developed from acquired assets or cobbled together from piecemeal components. IT360 delivers advantages including:
  • Integrated console for operations and service management
  • Quick and easy rollout of requested features and fixes
  • Dedicated edition with MSP-specific features and enhancements
  • Affordability and ease of use
“Today, we are managing many customers’ infrastructure using IT360,” said Dutton. “We are experts in IT, and customers expect us to bring innovation and best practice to them proactively. With ManageEngine we have brought them a system that is relevant to their business requirements now and in the future.
“With more problems solved faster, we can also devote more time to growing our client base. The ManageEngine team and support have been brilliant; initially there were a few syncing issues, but once things were in place, we have simply been upgrading and renewing the license,” Dutton concludes.

Thursday, July 26, 2012

How to evaluate virtual firewalls

Dave Shackleford

As their virtualized infrastructure grows, many organizations feel the need to adapt and extend existing physical network security tools to provide greater visibility and functionality in these environments. Virtual firewalls are one of the leading virtual security products available today, and there are quite a few to choose from; Check Point has a Virtual Edition (VE) of its VPN-1 firewall, and Cisco is about to offer a virtual gateway product that emulates its ASA line of firewalls fairly closely. Juniper has a more purpose-built Virtual Gateway (the vGW line) that is derived from its Altor Networks acquisition, and Catbird and Reflex Systems also have virtual firewall products and capabilities. So what do you look for when evaluating virtual firewall technology?

Virtual firewalls: management and scalability

Before digging into the specifics of virtual firewalls, it's important to determine whether you really need one. Very small virtualization deployments likely won't. However, with a large number of virtual machines of varying sensitivity levels and highly complex virtual networks, there's a fair chance that virtual firewall technology could play a role in your layered defense strategy. Note that virtual firewalls are highly unlikely to replace all your physical firewalls (although some consolidation is expected for those with a large number of physical firewalls). Assuming you need one -- now what?

There are several key considerations any security or network team should include when reviewing virtual firewalls. The first two aspects you'll need to evaluate are similar to what you'd evaluate for physical firewalls: scale and management. In terms of management, you'll first want to determine whether the firewall is largely managed through a standalone console (usually Web-based), or integrates into the virtualization management platform (such as VMware's vCenter). For those with a standalone console, the standard management considerations apply, such as ease of use, role-based access controls, granularity of configuration options, etc. Another consideration is the command-line management capabilities of the virtual device, and how they're accessed. For example, most Cisco engineers prefer command-line IOS operation, and most virtual firewalls can be accessed via SSH.

Scale is critical in virtual firewalls, especially for very large and complex environments. Virtual firewall scalability comes down to two aspects. First, you'll need to determine how many virtual machines and/or virtual switches a single virtual firewall can accommodate. For large environments with numerous virtual switches and VMs on a single hypervisor, this can be a big issue. The second major scalability concern is the number of virtual firewalls that can be managed from the vendor's console, and how well policies and configuration details can be shared between the various virtual firewall devices.

Virtual firewalls: integration

A crucial evaluation point for virtual firewall devices is how the firewall actually integrates into the virtualization platform or environment. There are two common implementation methods. The first is the simplest: a firewall that is a virtual appliance or specialized virtual machine (VM). This can be loaded on a hypervisor just like any other VM, and then configured to work with new or existing virtual switches. The advantage to this model is its simplicity and ease of implementation, while the disadvantages include higher performance impact on the hypervisor, less integration with the virtualization infrastructure, and possibly fewer configuration options.

The second implementation method is to integrate fully with the hypervisor kernel, also known as the Virtual Machine Monitor (VMM). This affords access to the native hypervisor and management platform APIs, as well as streamlined performance and lower-level recognition of VM traffic, but may also necessitate additional time and effort to properly install and configure the platform, and some highly customized virtualization environments may encounter stability issues or conflicts.

Other factors to consider when evaluating virtual firewalls include physical security integration and VM security policy depth and breadth. Virtual firewalls can "see" what is happening in a virtual environment, but can they relay alerts and security information to their physical counterparts? Look for any native or simple integration capabilities with physical firewalls, IDS/IPS and event management platforms. In addition, virtual firewalls can and should evaluate VM configurations and security posture above and beyond the traffic coming and going into the virtual environment. Some virtual firewalls can perform antimalware, network access control (NAC) and configuration management and control functions, all of which add significant value.
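To make the physical-integration point concrete, here is a minimal sketch of forwarding a virtual-firewall alert to a physical SIEM or event-management platform as a syslog-style message, using only Python's standard library. The field names, the `vfw-alert` prefix, and the collector address are all hypothetical, not any vendor's schema.

```python
# Hedged sketch: relaying virtual-firewall alerts to a physical event
# management platform over syslog. Field names are illustrative only.
import logging
import logging.handlers

def format_alert(alert):
    """Render an alert dict as a single key=value syslog-style line."""
    return "vfw-alert " + " ".join(f"{key}={alert[key]}" for key in sorted(alert))

alert = {"vm": "db-vm", "event": "policy-violation", "dport": 22}
line = format_alert(alert)
print(line)  # vfw-alert dport=22 event=policy-violation vm=db-vm

# Shipping the line to a collector (UDP port 514 assumed here) is
# fire-and-forget; a real deployment would point at the SIEM's address.
logger = logging.getLogger("vfw")
logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))
logger.warning(line)
```

A structured, sortable key=value format like this makes the alerts easy for downstream event-correlation tools to parse, which is the integration capability the paragraph above recommends looking for.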

About the author:

Dave Shackleford is the senior vice president of research and the chief technology officer at IANS. Dave is a SANS analyst, instructor and course author, as well as a GIAC technical director.
build-access-manage at

Do you need virtual firewalls? What to consider first

John Burke, Contributor

The size of the virtual hole in enterprise security is daunting. Virtual firewalls may be a solution, but there are many factors to consider first.

What are virtual firewalls?

Virtual firewalls are virtual appliances that re-create the functions of a physical firewall. They run inside the same virtual environments as the workloads they protect. Because they sit inside the virtual environment, they apply policy to traffic that is invisible to the physical network, securing it without negating the agility that virtualization brings. They don't necessarily care whether the virtual machines (VMs) are in the data center or floating up to an Infrastructure as a Service (IaaS) environment.
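As an illustration of the policy-enforcement idea described above (not any vendor's implementation), a virtual firewall conceptually matches each VM-to-VM flow against an ordered rule list and applies the first matching rule; the rule fields and VM names below are hypothetical.

```python
# Minimal sketch of first-match policy evaluation for VM-to-VM flows.
# Rule fields are illustrative; a missing field acts as a wildcard.

def evaluate(rules, flow, default="deny"):
    """Return the action of the first rule matching the flow."""
    for rule in rules:
        if all(rule.get(field) in (None, flow.get(field))
               for field in ("src", "dst", "proto", "dport")):
            return rule["action"]
    return default

# Example policy: web VMs may reach the database tier on TCP 5432 only.
rules = [
    {"src": "web-vm", "dst": "db-vm", "proto": "tcp", "dport": 5432,
     "action": "allow"},
    {"dst": "db-vm", "action": "deny"},  # all other traffic to the DB is denied
]

print(evaluate(rules, {"src": "web-vm", "dst": "db-vm",
                       "proto": "tcp", "dport": 5432}))  # allow
print(evaluate(rules, {"src": "app-vm", "dst": "db-vm",
                       "proto": "tcp", "dport": 22}))    # deny
```

Because the rules live inside the virtual environment, this kind of check can run on traffic that never touches the physical network, which is exactly the visibility gap the paragraph describes.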

Why the need for virtual firewalls?

Currently more than 97% of companies virtualize servers, and more than 53% of the workloads running in the data center are on virtual servers. During the conversion from physical to virtual, security structures between servers on the physical network are either dropped or they are maintained as physical systems.

When physical firewalls are used to address virtual traffic, this traffic must be routed out of the virtual environment, through the physical security infrastructure, and back into the virtual environment. This kind of hairpinning adds complexity, increases fragility and decreases the ability to move workloads around. What's more, things only get more difficult as enterprises extend their reach into IaaS environments. Currently, 17% of companies use IaaS, and an increasing number of IT shops are using it for customer-facing work.

Given this, it's clear that IT must secure both the internal virtual environment, as well as the external network. Virtual firewalls can be used for both environments.

Planning a virtual firewall strategy

If you're considering virtual firewalls for IaaS or other public cloud use, it is important to be sure the virtual appliance you use internally can be provided on your cloud provider's platform. If the virtual appliance only runs under VMware, but you need it to work in a Xen- or KVM-based IaaS environment, you will be out of luck.

Why a single-policy environment for physical and virtual firewalls?

It's best to integrate virtual and physical firewalls into the same policy environment, and better still to use a single tool set for both. A single environment means business users can be sure that the same access controls will follow their data wherever it flows. A single environment also means IT doesn't have to:
  • maintain and synchronize activity across parallel environments;
  • keep up multiple staff skill sets;
  • continually maintain cross-platform verifications of policy equivalence;
  • manage multiple vendor and support relationships.

In an ideal virtual firewall scenario, you would have a single firewall vendor that provides a virtual platform running under the hypervisors you need, and you would have tools that manage both virtual and physical appliances.

Products capable of managing a single vendor's virtual and physical appliances together include Cisco's Secure Policy Manager, McAfee's Firewall Enterprise Control Center and StoneSoft's StoneGate Management Center.

While multivendor environments are not ideal, a few tools, from vendors such as FireMon and Tufin, can manage multivendor firewall deployments.

Virtual firewalls and IaaS: Potential challenges

Before you start jumping those hurdles for IaaS, consider whether a virtual appliance in IaaS will fit into your compliance or security framework. Using a virtual firewall in an IaaS environment, even if it is your own chosen virtual appliance, implies a level of trust in the cloud provider, since VM-to-VM traffic will be visible to whoever controls that environment.

If you can't assert this level of trust for the cloud platforms, you must instead resort to a host-based firewall or VPN solutions that filter traffic in and out of VMs. These consume more resources than virtual appliances because, for example, if a packet gets dropped once at an appliance, it might have to be dropped on every server that would have been sitting behind that appliance. Nevertheless, these host-based firewalls or VPN solutions require no additional level of trust in the cloud provider.

Breaking down IT silos for virtual firewall implementation

Lastly, a very practical point: Systems, security and network folks should not undertake virtual firewall rollout in a vacuum. All three groups must be involved in developing guidelines for when, how and why virtual firewalls will be implemented. All three must have a voice in planning and management, as well as visibility into the virtual firewall infrastructure. Without cooperation, all three teams are bound to step on each other's toes.

About the author: John Burke is a principal research analyst with Nemertes Research, where he advises key enterprise and vendor clients, conducts and analyzes primary research, and writes thought-leadership pieces across a wide variety of topics.

Tuesday, July 24, 2012

Network Virtualization will now be part of VMware


Nicira network virtualization architecture: The VMware of networking?

Shamus McGillicuddy, News Director

Nicira Networks emerged from stealth mode this month, articulating a new network virtualization architecture and software that it claims can do for networks what VMware has done for servers.

After four years of operating mostly in secret, the company timed its emergence carefully, not just announcing its Network Virtualization Platform, but also revealing the names of several major companies that have deployed its network virtualization architecture into production, including AT&T, Rackspace, eBay, NTT and Fidelity.

In this Q&A, Nicira CTO and co-founder Martin Casado and Vice President of Marketing Alan S. Cohen talk about the company's network virtualization architecture.

Nicira Networks is describing itself as a network virtualization company. What is your definition of network virtualization?

Martin Casado: Network virtualization to me has three components. When you virtualize anything, what you end up with must look like what you started with. When you virtualize the network, the final solution must provide a network that looks like the original one; otherwise you limit the workloads that can live in this new domain.

When you [perform] x86 server virtualization, the operating system doesn't know it's not running on the physical machine. Network virtualization is where you build a solution where you can create logical networks on top of a physical network that [have] all the same properties of the physical network.

The second component is [that] all the mapping of the management of that logical [network] to physical view is done totally programmatically. With server virtualization, servers virtualize compute, storage and memory, and anytime things move within the server or new VMs [virtual machines] spin up, all of that has to be done automatically. The same thing [must happen] with network virtualization. You create logical networks, you expose them to VMs and then, anytime things change, it's all automatically patched up.

The last component is that [network virtualization] should be compatible with any hardware. It should work with any vendor. Virtualization does decoupling and the decoupling should be independent of the underlying hardware. Network virtualization is simply creating a logical view of the network and mapping it to a physical view.

Alan S. Cohen: When people talk about other approaches, like using OpenFlow, they're still tied to hardware. Network virtualization doesn't equal OpenFlow.

What is Nicira's Distributed Virtual Network Infrastructure (DVNI) and how does it compare to other examples of network virtualization architecture, such as software-defined networking?

Casado: Software-defined networking is just a general paradigm in which the control plane is decoupled from the data plane. You could use this to run a backbone network or a wireless network. Software-defined networking does not equal network virtualization. It's just one way of creating networks.

DVNI is a network virtualization solution where the intelligence resides at the edge of the network. It's controlled using software-defined networking, and it allows you to create a logical network that is fully independent of the hardware. An OpenFlow solution would try to emulate the same thing, but it would require OpenFlow hardware. We don't require you to change your existing hardware or upgrade. And if you do, it doesn't have to be on an OpenFlow-compatible design. It could be OpenFlow, but it doesn't have to be.

Another thing that differentiates us almost uniquely: We introduce a new address space, which means that our logical networks can look like physical networks no matter where we are. Even if the physical network is L3 [Layer 3] we can give you an L2 [Layer 2] network. Or if it is L2, we can give you an L3 network. We are totally decoupled from physical networks.

Most network virtualization solutions today don't provide you with a virtual network. They provide you with a subset of the existing network, a basic technique called "slicing." VLANs are slicing -- they will take the existing network and give you a piece of it.

Instead of giving you a piece of that, we give you an entirely new network that looks exactly how you want it to look. A VLAN will give you your own little segment of the world, but if you have IPv4 infrastructure, it won't allow your VMs to send IPv6 traffic. It doesn't change the way that the logical view looks, and it doesn't change the physical network.

With our approach, even if you have IPv4 infrastructure, we could allow the VMs to have IPv6. We introduce an entirely new world.

Can you tell us more about the intelligence at the edge of your network virtualization architecture?

Casado: Nicira is the main developer of Open vSwitch. [The intelligence is] either in an Open vSwitch [on the server] or at top-of-rack. For this announcement, it is in the Open vSwitch. Within the server, we deploy the Open vSwitch, whether Xen or KVM or VMware. And that Open vSwitch, under the coordination of a controller, will create a set of L2 and L3 tunnels [between the physical network and the server hypervisors]. With this, we can create an illusion of a virtual network that will allow us to have any VM run anywhere over any type of hardware.
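A rough sketch of the kind of edge plumbing Casado describes, using the standard `ovs-vsctl` tool that ships with Open vSwitch. The bridge name, controller address, and peer IP below are placeholders, not details from the article:

```shell
# On each hypervisor: create an integration bridge for local VM ports.
ovs-vsctl add-br br-int

# Point the bridge at the central controller that coordinates the edge.
ovs-vsctl set-controller br-int tcp:192.0.2.10:6633

# Build a GRE tunnel to a peer hypervisor; the overlay rides on top of
# whatever L2/L3 physical network connects the two hosts.
ovs-vsctl add-port br-int gre0 -- set interface gre0 \
    type=gre options:remote_ip=192.0.2.20
```

A mesh of such tunnels between hypervisors is what lets the controller present a logical network that is independent of the physical topology underneath.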

What does Nicira's Network Virtualization Platform actually do?

Casado: DVNI is a general architectural approach, which says you have intelligence at the edge. The Network Virtualization Platform is the product; it's our instantiation of DVNI.

How do you abstract the physical network? Are you using OpenFlow?

Casado: We have a set of servers that are controllers, and they talk to these edge devices -- the Open vSwitch or top-of-rack switches. That communication is using OpenFlow. Because Open vSwitch is something we developed, it doesn't matter that it's OpenFlow. It could be any other protocol and the customers wouldn't know the difference.

The magic in creating this new view is within this intelligent edge. We map between this virtual view of the network and the physical view of the network. When a packet leaves a VM, we do lookups in the virtual world and then we map that into the physical world. We send [the packet] to the physical world. Then we transport it back from the physical world into the virtual world, where we do some more computation on it.

It's very similar to how server hypervisors work. They manage these virtual address spaces and map [them] to the physical address space. We manage these virtual network address spaces and we map it to the physical address space along the edge in real time.
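The two-table lookup Casado describes can be sketched in a few lines. This is purely illustrative (not Nicira's code): the state tables, names, and addresses are hypothetical stand-ins for what a controller would program into each edge.

```python
# Illustrative sketch of edge mapping: a lookup in the logical network
# picks the destination VM, the edge maps that VM to the physical
# hypervisor hosting it, then encapsulates the packet for transport.

# Hypothetical state tables programmed into the edge by the controller.
logical_forwarding = {("lswitch-1", "02:aa"): "vm-web"}  # (logical switch, dst MAC) -> VM
vm_location        = {"vm-web": "10.0.0.7"}              # VM -> hypervisor IP

def edge_forward(lswitch, dst_mac, payload):
    vm = logical_forwarding[(lswitch, dst_mac)]   # lookup in the virtual world
    hypervisor = vm_location[vm]                  # map to the physical world
    return {"outer_dst": hypervisor,              # encapsulated for transport
            "tunnel_key": lswitch,
            "inner": payload}

pkt = edge_forward("lswitch-1", "02:aa", b"app data")
print(pkt["outer_dst"])  # 10.0.0.7
```

When a VM moves, only `vm_location` needs updating, which mirrors the "automatically patched up" property Casado attributes to network virtualization.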

Who are your customers within an IT organization? Network architects? Virtualization administrators?

Casado: Primarily we target the cloud architects. Cloud architects understand why networks get in the way when they build these things out. We do work with some network architects, but they tend to be fairly forward-thinking.

Cohen: We've seen service providers that are already organized for cloud, and a certain set of enterprises have begun to organize around cloud. These are multidisciplinary teams -- people who have server virtualization skills as well as storage and networking experience. But that is a fairly nascent movement on the enterprise side. You will see more people start to organize their infrastructure teams into these cloud units. They'll break down the silos of, "I'm a server guy, I'm a storage guy, I'm a networking guy."

Do you need to build relationships with the network hardware vendors?

Casado: No, we don't. These will become two different problem domains. Eventually Microsoft and VMware will also take similar approaches. [The network] hardware will essentially become a backplane. It will become a fabric. That fabric will still have to be competitive. It will have to be competitive on price, on the ability to do QoS, on the scale it can achieve [and] on the latency. That will be a separate entity in the market from the virtual network, which provides the provisioning, the security policies, the QoS policies, isolation and things like that.

As soon as there are [hardware] partnerships in virtualization, you aren't really virtualizing. Virtualization in the past, by its very nature, decoupled the things that would be virtualized. While I think that hardware will adapt to be more amenable to virtualization, just like Intel [servers] adapted for VMware, I don't think there needs to be any tight partnerships between the [network] virtualization companies and the hardware.

You announced several significant customers (AT&T, Rackspace, Fidelity, eBay and NTT). What kind of scale are they achieving with your solutions?

Casado: We have production deployments with production traffic. I'm not allowed to give out the numbers because they are customer-sensitive. But I can tell you these are hundreds of servers and thousands of VMs. These aren't one rack, but are many hundreds of servers.

Are you specifically focused on cloud provider networks? At this point, what kind of enterprise would need this level of abstraction and control?

Casado: I don't characterize it as about cloud as much as it is about virtualization. I think people buy generally into [the idea behind] server virtualization, which is [that] you virtualize your servers and you should have some level of operational flexibility and vendor independence. But the truth is, you virtualize your servers and you have fairly limited operational flexibility and fairly limited vendor independence, particularly when it comes to the network. Our core focus is virtualized data centers, whether or not used in cloud model. We can add value just by unlocking all that latent potential for server virtualization.

Cohen: Because we have a software model, nobody has to buy a big box. We can start in the enterprise. The question is: Does the enterprise have the recognized need and the pain points? It's not a question of whether an enterprise is attracted to this value proposition. It's a question of where they are on the [virtualization] maturity curve.

