Data Centers are our focus

We help you build, access and manage your data centers and server rooms.

Structured Cabling

We structure your cabling: fiber optic, UTP, STP and electrical.

Get ready for the #Cloud

Start your Hyper-Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software and network (ITOM), and maintain your ITSM services.

Our Great People

A great team supporting happy customers.

Wednesday, April 24, 2013

Even Microsoft uses OpenFlow

Microsoft uses OpenFlow SDN for network monitoring and analysis

Shamus McGillicuddy

SANTA CLARA, Calif. -- Microsoft is using an OpenFlow software-defined network to capture and analyze traffic for network security and monitoring tools in its Internet-facing and cloud services data centers.

The OpenFlow-based tap aggregation system, called Distributed Ethernet Monitoring (DEMON) Appliance, is an alternative to expensive network packet brokers -- the specialized appliances that aggregate network taps and SPAN ports. Microsoft Principal Network Architect Rich Groves presented DEMON at the Open Networking Summit Tuesday.

Groves did not reveal which commercial software-defined networking (SDN) products Microsoft is using to enable DEMON, but he described the use of merchant silicon-based switches and an SDN control system to build the solution. Only a small number of vendors have announced products and features that enable SDN-based tap aggregation. For instance, Arista Networks announced DANZ, a feature set in the firmware of its merchant silicon-based 7050 switches that can aggregate, replicate and capture traffic for network monitoring applications, with advanced features like precision timestamping. Big Switch Networks sells Big Tap, a network monitoring application that runs on top of its controller and can turn an OpenFlow network into a continuous monitoring network.

Groves explained that using a traditional network packet broker for tap and SPAN port aggregation wasn't feasible at the scale of the network he needed to instrument. He was looking for a system that could monitor thousands of 10 Gigabit Ethernet (GbE) links per data center. Given that his network has top-of-rack switches with as many as 32x10 GbE uplinks, the sheer number of monitoring ports required put a packet broker out of reach on both scale and cost grounds.

DEMON enables data center-scale packet capture and analysis by turning merchant silicon-based switches into virtual appliances. "We have a layer of switches that do nothing but terminate monitoring ports," Groves said.

OpenFlow also allows Microsoft to create so-called service chains in DEMON. Network engineers can create policies that send the same traffic stream through multiple points of analysis and inspection.
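Groves did not name the controller Microsoft partnered on, so the following is an illustration only, not Microsoft's implementation: in the open-source Ryu controller, the simplest building block of such a chain, replicating one tapped stream to several tool ports, is a single flow entry with multiple output actions. The port numbers here are assumptions for the sketch.

```python
# Illustrative sketch only, not Microsoft's DEMON code.
# Requires the open-source Ryu controller; run with `ryu-manager`.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

TAP_PORT = 1          # assumed port receiving the tapped traffic
TOOL_PORTS = [2, 3]   # assumed ports facing two analysis tools

class TapReplicator(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(in_port=TAP_PORT)
        # One flow entry copies the same stream to every tool port.
        actions = [parser.OFPActionOutput(p) for p in TOOL_PORTS]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```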


Microsoft has also started programming against the system's application programming interfaces (APIs) to do more advanced and proactive traffic analysis. "We can set up 24-by-seven monitoring of TCP events for critical systems," he said. "We are building triggers based on changes to add or modify policies. Applications can start to troubleshoot themselves. We have the ability to have a network management system that receives syslog traffic from network devices. If it sees an uptick of syslog entries, it can program the APIs to capture more interesting data [relevant to the surge in syslog traffic]."
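Groves did not describe the API itself, so the sketch below is hypothetical: a sliding-window rate check over incoming syslog messages that calls an assumed capture-policy client when the rate spikes. The window length, baseline rate and surge factor are all assumptions.

```python
import time
from collections import deque

WINDOW_S = 60        # sliding window length in seconds (assumption)
BASELINE_MPS = 50.0  # assumed normal syslog rate, messages per second
SURGE_FACTOR = 3.0   # trigger when rate exceeds 3x baseline (assumption)

_timestamps = deque()

def on_syslog_message(source_device, capture_api):
    """Called once per syslog message. `capture_api` is a hypothetical
    client for the kind of capture-policy API Groves describes."""
    now = time.time()
    _timestamps.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while _timestamps and _timestamps[0] < now - WINDOW_S:
        _timestamps.popleft()
    rate = len(_timestamps) / WINDOW_S
    if rate > SURGE_FACTOR * BASELINE_MPS:
        # Hypothetical call: capture richer data from the noisy device.
        capture_api.add_capture_policy(device=source_device, duration_s=300)
```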


"There was no way we could have done this without the [OpenFlow] system we partnered on," Groves said. "To use OpenFlow here helps us scale this method, and with a controller we were able to scale as large as we needed."

The only limitation Groves has run into is the number of flow entries he can program into his merchant silicon-based switches. He said he's generally limited to about 750 SDN flows per switch, which is fine for DEMON's purposes, "but more is always better."
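A practical consequence of that limit is that a controller application has to budget its flow entries. A trivial guard, with the roughly 750-entry figure from the talk used as an assumed ceiling:

```python
FLOW_BUDGET = 750  # approximate per-switch flow-table limit Groves cites

class FlowTableBudget:
    """Track installed flows and refuse to exceed the hardware budget."""
    def __init__(self, budget=FLOW_BUDGET):
        self.budget = budget
        self.installed = set()

    def install(self, flow_key, install_fn):
        if flow_key in self.installed:
            return  # already programmed; nothing to do
        if len(self.installed) >= self.budget:
            raise RuntimeError("flow table full: aggregate matches "
                               "or expire stale entries first")
        install_fn(flow_key)  # caller supplies the actual flow-mod call
        self.installed.add(flow_key)
```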

build-access-manage at dayaciptamandiri.com

What is NFV (Network Function Virtualization)?


Network Function Virtualization or NFV Explained
Last Update: Feb 14, 2013 | 02:00
Originating Author: Steve Noble


Introduction

While confusion reigns when it comes to the meaning of Software Defined Networks, the newer concept of Network Function Virtualization (NFV) is attempting to separate itself from the fray.
Network Function Virtualization can be summed up in one statement: "Due to recent network-focused advancements in PC hardware, any service that can be delivered on proprietary, application-specific hardware should also be deliverable on a virtual machine." Essentially: routers, firewalls, load balancers and other network devices, all running virtualized on commodity hardware.
Source: Steve Noble

NFV Technical Background

NFV was born in October 2012, when AT&T, BT, China Mobile, Deutsche Telekom and many other telcos introduced the NFV Call to Action document. To increase velocity, a new committee was set up under ETSI, the European Telecommunications Standards Institute; this committee will work on creating the NFV standard.

What Makes NFV Different

While PC-based network devices have been available since the '80s, they were generally used by small companies and networking enthusiasts who could not justify or afford a commercial solution. In the last few years, several drivers have brought PC-based networking devices back into the limelight, including Ethernet as the last mile, better network interface cards, and Intel's focus on network processing in its last few generations of chips.
Today many vendors are producing PC-based network devices. Advancements in packet handling within Intel's processors, which allow processor cores to be reprogrammed as network processors, let PC-based network devices push tens or even hundreds of Gbps.

Adding Network APIs To Devices Allows For Higher Performance

For the last few years, network device vendors have been building network APIs such as OpenFlow into their devices. Having an API to interact with network devices allows for the separation of the control plane from the forwarding plane: the control plane runs on a separate device and sends control data to the network device. One benefit of network APIs is that switches that can push Tbps can act like mid- to high-end routers.
This combination of high-performance firewall and load-balancing software running on commodity PC hardware, along with the ability to offload traffic onto inexpensive programmable switches, is driving large changes in the networking industry.

Values of NFV

Some of the values of the NFV concept are speed, agility and cost reduction. By centralizing designs around commodity server hardware, network operators can:
  • Do a single PoP/Site design based on commodity compute hardware;
    • Avoiding designs involving one-off installs of appliances that have different power, cooling and space needs simplifies planning.
  • Utilize resources more effectively;
    • Virtualization allows providers to allocate only the necessary resources needed by each feature/function.
  • Deploy network functions without having to send engineers to each site;
    • “Truck Rolls” are costly both from a time and money standpoint.
  • Achieve reductions in OpEx and CapEx; and,
  • Achieve a reduction in system complexity.

What Is The Status Of NFV?

The Network Function Virtualization committee is planning a kick-off meeting in mid-January in France. Network operators, server manufacturers, and network equipment vendors have committed to being involved, promising constructive discussions about the current state of each industry. The list of vendors involved is very large and growing every day. Once the committee is settled, the base concepts will be worked out.
Action Item: CIOs should watch the NFV and Network API spaces carefully. As the NFV committee develops standards and concepts, a lot of information will come out. CTOs can start looking at Network APIs like OpenFlow and task their network architecture team to look at Open Source projects such as RouteFlow, which has built a full control plane, including routing, into a single machine.
Footnotes: See Defining Software-led Infrastructure for key disruptive technologies (including SDN) that will allow for a simplified and automated next-generation data center.
Steve Noble is Founder and Chief Analyst at Router Analysis. Steve has more than 20 years of experience designing and running large networks, and since 1996 he has been heavily involved in writing and executing test plans for networking devices. His professional experience includes VP of Technology at XDN Inc., Technical Leader at both Cisco and Procket Networks, and Fellow, Network Architecture, at Exodus Communications.

NFV and SDN: what's the difference?




NFV and SDN: What’s the Difference?


Software Defined Networking (SDN) and Network Function Virtualization (NFV) are hot topics. They are clearly related, but how exactly are they similar? How are they different? How do they complement each other?

SDN – Born on the Campus, Matured in the Data Center

SDN got its start on campus networks. As researchers experimented with new protocols, they were frustrated by the need to change the software in their network devices each time they wanted to try a new approach. They came up with the idea of making the behavior of network devices programmable and allowing them to be controlled by a central element. This led to a formalization of the principal elements that define SDN today:
  • Separation of control and forwarding functions
  • Centralization of control
  • Ability to program the behavior of the network using well-defined interfaces
The next area of success for SDN was in cloud data centers. As the size and scope of these data centers expanded it became clear that a better way was needed to connect and control the explosion of virtual machines. The principles of SDN soon showed promise in improving how data centers could be controlled.

OpenFlow – Driving Towards Standards

So, where does OpenFlow come into the picture? As SDN started to gain more prominence, it became clear that standardization was needed. The Open Networking Foundation (ONF) [1] was organized for the purpose of formalizing one approach for controllers talking to network elements, and that approach is OpenFlow. OpenFlow defines both a model for how traffic is organized into flows, and how those flows can be controlled as needed. This was a big step forward in realizing the benefits of SDN.
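As a rough mental model (a sketch for intuition, not the OpenFlow wire format): a flow table is an ordered set of entries, each pairing match fields with actions, and the controller decides what goes into it. The field names below loosely follow OpenFlow conventions but are illustrative.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """Simplified flow-table entry: priority, match fields, actions, counters.
    Not the OpenFlow wire format; an intuition-building sketch only."""
    priority: int
    match: dict    # e.g. {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
    actions: list  # e.g. ["output:2"] or ["set_vlan:100", "output:3"]
    packets: int = 0
    bytes: int = 0

# The switch applies the highest-priority matching entry; a catch-all
# "table-miss" entry sends unknown traffic to the controller for a decision.
flow_table = [
    FlowEntry(200, {"ipv4_dst": "10.0.0.5"}, ["output:2"]),
    FlowEntry(0, {}, ["output:CONTROLLER"]),
]
```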

NFV – Created by Service Providers

Whereas SDN was created by researchers and data center architects, NFV was created by a consortium of service providers. The original NFV white paper [2] describes the problems that they are facing, along with their proposed solution:
Network Operators’ networks are populated with a large and increasing variety of proprietary hardware appliances. To launch a new network service often requires yet another variety and finding the space and power to accommodate these boxes is becoming increasingly difficult; compounded by the increasing costs of energy, capital investment challenges and the rarity of skills necessary to design, integrate and operate increasingly complex hardware-based appliances. Moreover, hardware-based appliances rapidly reach end of life, requiring much of the procure-design-integrate-deploy cycle to be repeated with little or no revenue benefit.
Network Functions Virtualisation aims to address these problems by leveraging standard IT virtualisation technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be located in Datacentres, Network Nodes and in the end user premises. We believe Network Functions Virtualisation is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures.

SDN versus NFV

Now, let’s turn to the relationship between SDN and NFV. The original NFV white paper [2] gives an overview:
As shown in Figure 1, Network Functions Virtualisation is highly complementary to Software Defined Networking (SDN), but not dependent on it (or vice-versa). Network Functions Virtualisation can be implemented without a SDN being required, although the two concepts and solutions can be combined and potentially greater value accrued.
Figure 1. Network Functions Virtualisation Relationship with SDN
Network Functions Virtualisation goals can be achieved using non-SDN mechanisms, relying on the techniques currently in use in many datacentres. But approaches relying on the separation of the control and data forwarding planes as proposed by SDN can enhance performance, simplify compatibility with existing deployments, and facilitate operation and maintenance procedures. Network Functions Virtualisation is able to support SDN by providing the infrastructure upon which the SDN software can be run. Furthermore, Network Functions Virtualisation aligns closely with the SDN objectives to use commodity servers and switches.

SDN and NFV – Working Together?

Let’s look at an example of how SDN and NFV could work together. First, Figure 2 shows how a managed router service is implemented today, using a router at the customer site.
Figure 2: Managed Router Service Today
NFV would be applied to this situation by virtualizing the router function, as shown in Figure 3. All that is left at the customer site is a Network Interface Device (NID) for providing a point of demarcation as well as for measuring performance.
Figure 3: Managed Router Service Using NFV
Finally, SDN is introduced to separate the control and data planes, as shown in Figure 4. Now the data packets are forwarded by an optimized data plane, while the routing (control plane) function runs in a virtual machine on a rack-mount server.
Figure 4: Managed Router Service Using NFV and SDN
The combination of SDN and NFV shown in Figure 4 provides an optimum solution:
  • An expensive and dedicated appliance is replaced by generic hardware and advanced software.
  • The software control plane is moved from an expensive location (a dedicated platform) to an optimized location (a server in a data center or POP).
  • The control of the data plane has been abstracted and standardized, allowing for network and application evolution without the need for upgrades of network devices.

Summary

The table below provides a brief comparison of some of the key points of SDN and NFV.
Category | SDN | NFV
Reason for being | Separation of control and data, centralization of control, programmability of the network | Relocation of network functions from dedicated appliances to generic servers
Target location | Campus, data center / cloud | Service provider network
Target devices | Commodity servers and switches | Commodity servers and switches
Initial applications | Cloud orchestration and networking | Routers, firewalls, gateways, CDN, WAN accelerators, SLA assurance
New protocols | OpenFlow | None yet
Formalization | Open Networking Foundation (ONF) | ETSI NFV Working Group

References

[1] Open Networking Foundation, https://www.opennetworking.org/
[2] "Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action," ETSI, October 2012, https://portal.etsi.org/NFV/NFV_White_Paper.pdf

Tuesday, April 23, 2013

JOB FAIR EVENT, 11 MAY 2013


JOB FAIR EVENT
GKI Klasis Jakarta Timur will hold a Job Fair on 11 May 2013, from 08.00 until finished, at the SMPK Penabur canteen, Jl. Boulevard Raya kav. 21, Harapan Indah, Bekasi. Companies interested in taking part can contact Ibu Dyah (0813-1555.1294), email: permata_dyah67@yahoo.com, Bpk Tony Lie (0812-8670589), or the church administrative office. Many companies will take part in the Job Fair, with vacancies available for graduates of SD, SMP, SMA/SMK, D1, D2, D3 and S1 or equivalent.

Congratulations to GSK on completing its ServiceDesk Plus implementation

Congratulations to GSK Group on completing its implementation of ServiceDesk Plus.

ServiceDesk Plus, the leading helpdesk application, now adds a Project Management solution that makes it easy to monitor the execution of work within project management.


A standalone project management tool for deploying an IT task is like an override switch that bypasses what already works. At the end of the day, the purpose of project management is to streamline large projects and make your tasks easier to handle. The Project Management module in ServiceDesk Plus is a tightly integrated solution that combines the IT help desk with project management.
With Project Management integrated into ServiceDesk Plus, managing projects is simpler for IT admins. Irrespective of the size of the project, you can easily track and manage it, and collaborate with various teams and experts.
Each Project is divided into Milestones and each milestone is subdivided into Tasks. These phases have a flexible structure and are given equal importance. IT Administrators can easily set roles, provide access permissions and collaborate with other project members. You can track the status of tasks with the help of Gantt Charts. Check out the complete list of what Project Management has to offer below.
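The hierarchy is easy to picture in code. This is a generic sketch of the Project, Milestone and Task structure described above, not ServiceDesk Plus's actual data model; the field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    owner: str = ""    # project member the task is assigned to
    done: bool = False

@dataclass
class Milestone:
    name: str
    tasks: List[Task] = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of tasks complete; the kind of number a
        Gantt-style status view would plot."""
        return sum(t.done for t in self.tasks) / len(self.tasks) if self.tasks else 0.0

@dataclass
class Project:
    name: str
    milestones: List[Milestone] = field(default_factory=list)
```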

Features

 
  • Get complete details of any project - Project Type, Start date, End date, Projected date, Estimated cost, etc. from the list view
  • Assign Project Roles and provide access permissions to different Project Members
  • Divide Projects into Milestones and Milestones into Tasks
  • Directly associate Tasks with Projects
  • Allows for Task dependency configuration
  • Project Overview Map
  • Estimate costs and track with work logs and timesheets
  • Tracking progress using color coded Gantt Charts and Calendar View

Benefits

 
  • Switch seamlessly between the Helpdesk and Project modules
  • No more juggling multiple tools and products
  • Centralized Dashboard for Projects, Milestones and Tasks
  • Effective Resource Management
  • Efficient Time Management
  • Minimal Cost Expenditure
  • Streamlines project creation & tracking
  • Task Dependency Map for efficient handling of Tasks

Monitoring workloads in the cloud




Monitoring cloud workload activities

Cloud managers work within a distributed WAN computing infrastructure; one of the biggest shifts from the traditional data center is that all data is stored, managed and administered in a private cloud. Effective cloud-based workload monitoring can capture performance issues before they happen. Knowing how your cloud is behaving allows you to deliver a more powerful cloud computing experience.
Gathering cloud performance metrics

IT admins must actively gather and log cloud-facing server performance metrics and data, especially since most servers that host cloud workloads are virtual machines (VMs) that require dedicated resources. Over-allocating or under-allocating resources to cloud servers can be a costly mistake.
Proper planning and workload management is necessary prior to any major cloud rollout. When gathering performance metrics about specific servers running dedicated workloads, admins must evaluate the following details:
  • CPU usage: The cloud-facing server could be physical or virtual. Administrators must look at that machine and determine how users are accessing CPU resources. With numerous users launching desktops or applications from the cloud, careful consideration must be given to how many dedicated cores the server requires.
  • RAM requirements: Cloud-based workloads can be RAM-intensive. Monitoring a workload on a specific server allows you to gauge how much RAM to allocate. The key is to plan for fluctuations without over-allocating resources; you can do this through workload monitoring. By looking at RAM use over a period of time, administrators can determine when usage spikes will occur as well as appropriate RAM levels.
  • Storage needs: Sizing considerations are important when working with a cloud workload. User settings and workload location all require space. I/O should also be examined. For example, a boot storm or massive spike in use can cripple a SAN that’s unprepared for such an event. By monitoring I/O and controller metrics, administrators can determine performance levels specific to storage systems. You can use solid-state disks (SSDs) or onboard flash cache to help prevent I/O spikes.
  • Network design: Networking and its architecture play a very important role in a cloud infrastructure and its workload. Monitoring network throughput within the data center as well as in the cloud will help determine specific speed requirements. Uplinks from servers into the SAN through a fabric switch that provides 10 GbE connectivity can help reduce bottlenecks and help improve cloud workload performance.
Performance monitoring tools are also useful. Citrix Systems Inc.’s EdgeSight for Endpoints gathers performance metrics at the server and the end-point level. By understanding how the cloud server is operating and knowing end-user requirements, administrators can size physical infrastructure properly to support virtual instances.
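To make the four metric families above concrete, here is a minimal host-sampling sketch using the open-source psutil library (not EdgeSight); the one-second CPU interval and the choice of counters are assumptions, and a real deployment would feed these samples into a monitoring product.

```python
import psutil

def sample_host_metrics():
    """One sample of the CPU, RAM, storage-I/O and network counters
    discussed above, taken from the local host."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),   # CPU usage over 1s
        "ram_percent": psutil.virtual_memory().percent,  # RAM pressure
        "disk_io": psutil.disk_io_counters()._asdict(),  # storage I/O counters
        "net_io": psutil.net_io_counters()._asdict(),    # network throughput counters
    }

if __name__ == "__main__":
    print(sample_host_metrics())
```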

Advantages of workflow automation
Active cloud workload monitoring goes beyond gathering metrics and statistics. Many systems monitor workloads and provide workflow automation in the event of a usage spike.
Certain markets, like the travel industry, experience usage spikes during particular periods of the year. To prepare, workload thresholds are set so that new VMs can be spun up as soon as demand increases; that way, end users always have access to data and normal workloads without performance degradation.
Workflow automation also helps with disaster recovery and backup. As data replication occurs between numerous sites, a remote location can spin up identical workloads if another site experiences data loss. Proper workload monitoring and data center design can help increase system stability and, more importantly, business continuity.
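The threshold rule described above reduces to a few lines of orchestration glue. The `cloud` client below is hypothetical, standing in for whatever VM-management API the platform exposes, and the thresholds are assumptions.

```python
SCALE_UP_AT = 80.0    # sustained pool CPU %, assumed scale-up trigger
SCALE_DOWN_AT = 30.0  # assumed idle threshold for scaling back down

def autoscale(pool_cpu_percent, cloud, min_vms=2, max_vms=20):
    """Spin identical VMs up on sustained demand, retire them when idle.
    `cloud` is a hypothetical client with vm_count/clone_vm/retire_vm calls."""
    n = cloud.vm_count()
    if pool_cpu_percent > SCALE_UP_AT and n < max_vms:
        cloud.clone_vm(template="workload-template")  # hypothetical call
    elif pool_cpu_percent < SCALE_DOWN_AT and n > min_vms:
        cloud.retire_vm()                             # hypothetical call
```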

Cloud monitoring tips
Here are a few rules to help maintain the health of your private cloud workloads:
Know your physical resources. Even though physical resources may seem endless initially, they have specific limits. Without properly monitoring and gauging these resources, they can be depleted very quickly. Cloud workloads can be demanding. Planning is a must.
Keep active logs. In addition to actively monitoring a cloud workload, cloud managers should log how the workload or server performs over time. Cloud servers can be upgraded and workloads can be migrated from one physical host to another. In these situations, knowing how well specific server sets operate compared with older server sets helps calculate total cost of ownership and return on investment. In many situations, good performance logs supply the statistical evidence needed to justify an increase in the data center budget.

Monitor end points. From the data center's perspective, engineers can monitor and manage active workloads, but it's also very important to monitor workload activities at the end point. By knowing how the workload is being delivered and how well it is being received, IT teams can create a more positive computing experience.
As a user accesses a workload in the cloud, admins have insight into which type of connection they're using, how well data is traveling to the end point and whether any modifications should be made. In some instances, admins may want to apply data compression or bandwidth optimization techniques so the workload functions properly at the end point.

Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is a Virtualization Solutions Architect at MTM Technologies, a national IT consulting firm.

Preparing your team for SDN




Preparing your teams for software defined networking

Andre Kindness

Infrastructure and operations (I&O) teams are aligning themselves and infrastructure around key workloads to drive greater simplicity and efficiency. In kind, the networking industry has responded by suggesting that networks can provide greater support for this approach using the OpenFlow protocol and Software Defined Networking (SDN) concepts.
I believe the SDN definition today equates to adding closed loop functionality so a network can intelligently orchestrate a set of services.  This architecture consists of three components: an automation and orchestration controller, a monitoring and data collection system, and a configuration and management system.
SDN provides the means to automate networks to better support different workloads, but I&O professionals also need to understand how SDN can support turning networks into a virtual network infrastructure.  
Whatever technology options networking professionals choose, the value cannot be extracted without preparation. Creating a workload centric infrastructure to serve the business requires the infrastructure to become standardised, self-service and pay-per-use, giving users rapid access to powerful and more flexible IT capabilities.
This, in turn, means I&O teams need to coordinate infrastructure elements, such as switches, firewalls, load balancers and optimisers, to deliver the right set of services to the right user, at the right time and at the right location. Workload centric networks will reconfigure these elements on the fly and monitor the output to ensure that the newly created services are within the bounds of the business policies and rules.
Our recommendations to I&O leaders for what SDN requires follow.  
Standardise your process, procedures, roles and responsibilities
I&O teams need a baseline in order to automate infrastructures. With this in place, you can then make changes to optimise workload performance and user experience. In our network assessment engagements with clients, we find that this is consistently an underdeveloped area: a scant few have standards documents for their config files, products, firmware or architecture. Start by refining processes: assess the current state of key operational and process activities, then standardise processes and skills around ITIL.

Invest in tools that empower other I&O teams to utilise the network
Network teams are already stretched too thin to be responsible for every networking decision. For example, there's little value in the network team handing out IP addresses every time new apps are loaded. Advanced DNS, DHCP and IPAM tools provide workflow processes so that a set of IP addresses can be delegated to a server team, who can then grab them as needed. This eliminates waste and repetitious activity, freeing the team to focus on higher-level technical skills like using SDKs to hook orchestration systems into network management software.
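The delegation idea in miniature, as a sketch rather than any vendor's IPAM API: the network team carves out a block, and the server team allocates from it without filing a ticket. The CIDR block and hostname are illustrative.

```python
import ipaddress

class DelegatedPool:
    """A block of addresses the network team hands to the server team."""
    def __init__(self, cidr: str):
        self._free = list(ipaddress.ip_network(cidr).hosts())
        self.assigned = {}

    def allocate(self, hostname: str):
        """Hand out the next free address and record who got it."""
        if not self._free:
            raise RuntimeError("pool exhausted; request another block")
        ip = self._free.pop(0)
        self.assigned[hostname] = ip
        return ip

pool = DelegatedPool("10.20.30.0/28")  # example block, an assumption
print(pool.allocate("app-server-01"))  # -> 10.20.30.1
```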

Add a software network engineer position
With today's mounting business requirements and escalating technology complexity, manual control is dead. With all of the variability in users, devices and services, plug-and-play operating systems must supersede command line interfaces in the network. Network software engineers can help create these operating systems with C++, software development kits (SDKs) and application programming interfaces (APIs) such as OpenFlow, using them to fuse distributed systems, virtualisation, data storage and so on. This enables developers to build network applications that integrate and interact with networking gear by manipulating switch tables or using Layer 2 and Layer 3 protocols such as Link Aggregation Control Protocol (LACP), Spanning Tree Protocol (STP), Rapid STP, Virtual Router Redundancy Protocol (VRRP) and 802.1X. Infrastructure monitoring is absolutely critical to support this closed-loop system.

Push the networking teams to start managing the virtual world
Workload centric infrastructure support requires that physical and virtual infrastructures work together. Forrester has found that few networking organisations manage virtual switches, firewalls, application load balancers or WAN optimisers. Virtual networking is an extension of the physical world; all the concepts remain the same. To help your teams overcome the fear of the unknown, transition the management of virtual switches from hypervisor administrators to networking personnel. They can manage either the hypervisor's native virtual switch or a networking vswitch such as Cisco's Nexus 1000V, IBM's 5000V or Open vSwitch.

Deploy more monitoring tools
After standardisation, any automation or adoption of workload centric infrastructure requires visibility into the workloads and processes, so the system knows what happens when adds, moves and deletions are made. This is a fundamental requirement in closed-loop systems. Monitoring tools and solutions need to move from a 'nice to have' to a 'must have' before you can derive value from virtual network infrastructures supporting workloads. I&O should be asking themselves how they will monitor each app, software and hardware deployment.
Andre Kindness is a senior analyst at Forrester Research, where he serves IT infrastructure and operations professionals. He is a leading expert on network operations and architecture.

SDN and the future of dynamic network management



SDN and the Future of Dynamic Network Management

Bruce Tolley, Vice President of Solutions and Outbound Marketing
Solarflare
SDN is garnering a lot of media attention of late, but there are still many unknowns regarding how radical a change it will bring to existing network architectures. The very notion of separating the control and data planes opens up the possibility of highly dynamic network environments that can be instantly optimized for individual application and user requirements. But with much of the technology still on the drawing board, it can be difficult to separate fact from fiction. In a conversation with IT Business Edge’s Arthur Cole, Solarflare’s vice president of solutions and outbound marketing, Bruce Tolley, offers insight into what is real, and what is merely possible.
Cole: Software-defined networking is on a roll these days, but it has barely made an impact on production environments. What are the main challenges in bringing SDN to the mainstream?
Tolley: Software-defined networking (SDN) is an approach to building computer networks that separates and abstracts elements of the network systems into the control plane and the data plane. The control plane manages switch and routing tables while the forwarding plane performs the Layer 2 and 3 filtering, forwarding and routing. SDN decouples the system that makes decisions about where traffic is sent, the control plane, from the underlying system that forwards traffic to the selected destination, the data plane.
SDN promises to simplify networking and enable new applications, such as network virtualization, in which the control plane is separated from the data plane and implemented in a software application. While mainly driven by the data center architects at the big Web 2.0 companies, this architecture allows enterprise IT managers to have programmable central control of network traffic without requiring physical access to the network's hardware devices. Many SDN advocates look to the example of the Linux community and the open source movement as an example of how users and customers can drive software innovation on top of an ecosystem of merchant silicon. In the case of Linux, it is Intel, AMD and ARM licensees who deliver that merchant silicon.
A limitation of SDN solutions today is that they are switch-centric and do not extend to the server endpoints, let alone the application interface. Solarflare’s technology is capable of extending an SDN solution from the edge switch all the way to the application interface because we provide a unique server endpoint solution. Our server adapters include kernel bypass via our OpenOnload stack.  OpenOnload provides direct access to the application and provides an open control plane that controls data flows directly to the application. This enables some truly unique capabilities in an SDN solution.
Cole: Is it crucial that the entire SDN universe revolve around a single standard like OpenFlow? How would a multi-protocol virtual network function?
Tolley: To mix metaphors, the brave new world of SDN will not be created in seven days and can only come into existence based on robust, open standards.
Arguably the first step toward SDN is the deployment of OpenFlow-enabled devices. OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network. Put simply, OpenFlow allows the path of network packets through the network of switches to be determined by software. This separation of the control intelligence from the forwarding allows for more sophisticated traffic management than is feasible using access control lists (ACLs) and routing protocols. That being said, there is more to SDN than just OpenFlow. The Open Networking Foundation has task groups working on multiple projects, such as extensibility, configuration and management, testing and interoperability, architecture and framework, and forwarding abstractions.
Networks of virtual switches are already being built today by customers using the various virtual operating systems on the market from VMware, Citrix, Red Hat KVM, etc. The functionality of these networks is primarily packet forwarding and filtering. The promise of OpenFlow and SDN is that the control plane that manages virtual and bare metal switches can be distributed, and the controller can be a standalone controller device, a virtual machine in a hypervisor, or embedded in the switch or the Ethernet NIC itself. Having an OpenFlow-capable server adapter and using OpenOnload can extend this capability to the application layer, enabling a true end-to-end SDN.
Cole: What, then, are the most crucial steps the enterprise needs to take now to lay the groundwork for SDN?
Tolley: The network architects at the big enterprises are invited to join the various groups driving standards for software-defined networking. There are also several efforts under way to show the beef. Today, technology leaders at customer and research consortia in the U.S., Europe and Asia are beginning to evaluate vendors in order to build OpenFlow networks for test beds and proof of concept (POC) testing.  A key goal of these POCs is to demonstrate that a multivendor OpenFlow network can perform under typical business loads.  For example, Solarflare partnered with NEC two years ago to demonstrate an OpenFlow SDN network using Solarflare’s server adapter and NEC’s network switch operating with a common control plane for all devices on the network. This was an early demonstration but showed the promise of this approach.
Therefore, these customers are looking for the tools often lumped in the category of network TAPs to provide precise visibility into network conditions without having to spend money on additional hardware infrastructure that often compromises performance. To put it simply, data center managers cannot improve the performance of systems they cannot measure and they need to prove the performance of these software-defined networks to start building the transition to this brave new world.
To point to some performance analysis tools that are available today, Solarflare has partnered with TS Associates to deliver a sophisticated application-level monitoring and analytics tool. The TS Associates Application Tap for Solarflare delivers insights into the performance dynamics of real-time applications.  Solarflare has also partnered its SolarCapture software with Arista's DANZ technology to ensure fine-grained visibility and traffic monitoring across an SDN network. With SolarCapture, any server can be turned into a performance monitoring tool with very little capex investment.