Tuesday, December 31, 2013

Why Network Functions Virtualization (NFV)??

         
Network Functions Virtualization (NFV) explicitly targets the two biggest problems facing network operators: bringing costs in line with revenue growth expectations and improving service velocity. NFV's premise is that industry standard IT virtualization technology (servers, switches, and storage) located in data centers, network nodes, or end-users' premises can be used to reduce the cost and increase the speed of service delivery for fixed and mobile networking functions. We believe Network Functions Virtualization is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures.

What Makes NFV Different
While PC-based network devices have been available since the '80s, they were generally used by small companies and networking enthusiasts who couldn't justify or afford a commercial solution. In the last few years many drivers have brought PC-based networking devices back into the limelight, including Ethernet as the last mile, better network interface cards, and Intel's focus on packet processing in its last few generations of chips.
Today many vendors are producing PC-based network devices. Advancements in packet handling within Intel's processors, which allow processor cores to be re-purposed as network processors, let PC-based network devices push tens or even hundreds of Gbps.

Values of NFV
Some of the values of the NFV concept are speed, agility, and cost reduction. By centralizing designs around commodity server hardware, network operators can:
·         Do a single PoP/site design based on commodity compute hardware. Avoiding one-off installs of appliances with different power, cooling, and space needs simplifies planning.
·         Utilize resources more effectively. Virtualization allows providers to allocate only the resources each feature/function actually needs.
·         Deploy network functions without having to send engineers to each site. "Truck rolls" are costly in both time and money.
·         Achieve reductions in OpEx and CapEx.
·         Reduce system complexity, i.e., deliver agility and flexibility: quickly scale services up or down to address changing demand, and support innovation by enabling services to be delivered as software on any industry-standard server hardware.
·         Accelerate time-to-market: reduce the time to deploy new networking services to support changing business requirements, seize new market opportunities, and improve the return on investment of new services. This also lowers the risk of rolling out new services, letting providers easily trial and evolve services to determine what best meets customers' needs.
·         Error resilience: ensure the appropriate level of resilience to hardware and software failures.
·         Scale: Network Functions Virtualization will only scale if all of the functions can be automated.
Overview of the ETSI NFV ISG

The ETSI Board approved the foundation of the NFV ISG in time for publication of our first white paper last October. ETSI is a global organisation and has proved to be an excellent environment in which to progress our work on Network Functions Virtualisation, and we extend our thanks to the Director General and the ETSI Board for their accommodation and support. Although ETSI is a Standards Development Organisation (SDO), the objective of the NFV ISG is not to produce standards. The key objectives are to achieve industry consensus on business and technical requirements for NFV, and to agree common approaches to meeting those requirements. The outputs are openly published and shared with relevant standards bodies, industry fora and consortia to encourage a wider collaborative effort. The NFV ISG will collaborate with other SDOs if any standardization is necessary to meet the requirements. The NFV ISG also provides an environment for the industry to collaborate on Proof of Concept (PoC) platforms that demonstrate solutions to the technical challenges of NFV implementation and encourage the growth of an open ecosystem.


NFV: Vision


NFV Architecture Model


  •     Network operators have proven NFV feasibility via proof of concept test platforms 
  •     Network operators and vendors have identified numerous “fields of application” spanning all domains (fixed and mobile network infrastructures)
  •     Significant CAPEX/OPEX benefits, also leveraging economies of scale
  •     Emerging virtual network appliance market 
  •     Novel ways to architect and operate networks, spawning a new wave of industry wide innovation 
  •     Network Functions Virtualization can dramatically change the telecom landscape and industry over the next 2-5 years






NFV Relationship with Software Defined Networks (SDN)

Whereas SDN was created by researchers and data center architects, NFV was created by a consortium of service providers. The original NFV white paper[1] describes the problems the operators are facing, along with their proposed solution.

SDN and NFV – Working Together?

Let’s look at an example of how SDN and NFV could work together. The figure shows how a managed router service is implemented today, using a router at the customer site.
Network Functions Virtualization's goals can be achieved using non-SDN mechanisms, relying on the techniques currently in use in many datacenters. But approaches relying on the separation of the control and data forwarding planes, as proposed by SDN, can enhance performance, simplify compatibility with existing deployments, and facilitate operation and maintenance procedures. Network Functions Virtualization is able to support SDN by providing the infrastructure upon which the SDN software can run. Furthermore, Network Functions Virtualization aligns closely with the SDN objective of using commodity servers and switches.

Managed Router Service Today

NFV would be applied to this situation by virtualizing the router function, as shown in the figure. All that is left at the customer site is a Network Interface Device (NID), providing a point of demarcation as well as measuring performance.

 Managed Router Service Using NFV

Finally, SDN is introduced to separate control and data, as shown in the figure. Now the data packets are forwarded by an optimized data plane, while the routing (control plane) function runs in a virtual machine on a rack-mount server.


Managed Router Service Using NFV and SDN
The combination of SDN and NFV shown in Figure provides an optimum solution:
·         An expensive and dedicated appliance is replaced by generic hardware and advanced software.
·         The software control plane is moved from an expensive location (a dedicated platform) to an optimized location (a server in a data center or POP).
·         The control of the data plane has been abstracted and standardized, allowing for network and application evolution without the need for upgrades of network devices.
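The division of labor described above can be sketched in a few lines of Python. All class, route, and next-hop names here are illustrative, not from any real controller API: the control plane computes routes and installs them, and the data plane only performs lookups.

```python
# Sketch of control/data plane separation (illustrative names only):
# the control plane computes routes; the data plane just looks them up.
import ipaddress

class ControlPlane:
    """Runs as software in a VM on a rack-mount server."""
    def __init__(self):
        self.routes = {}  # ip_network -> next hop

    def add_route(self, prefix, next_hop):
        self.routes[ipaddress.ip_network(prefix)] = next_hop

    def install(self, data_plane):
        # Push computed forwarding state down to the forwarding element.
        data_plane.table = dict(self.routes)

class DataPlane:
    """Optimized forwarding element: longest-prefix lookups, no routing logic."""
    def __init__(self):
        self.table = {}

    def forward(self, dst_ip):
        addr = ipaddress.ip_address(dst_ip)
        matches = [p for p in self.table if addr in p]
        if not matches:
            return "drop"
        best = max(matches, key=lambda p: p.prefixlen)  # longest prefix wins
        return self.table[best]

ctrl, dp = ControlPlane(), DataPlane()
ctrl.add_route("10.0.0.0/8", "core-link")
ctrl.add_route("10.1.0.0/16", "customer-nid")
ctrl.install(dp)
print(dp.forward("10.1.2.3"))   # customer-nid (more specific route)
print(dp.forward("192.0.2.1"))  # drop (no route installed)
```

Replacing the dedicated appliance then amounts to swapping the DataPlane implementation, while the same control-plane software keeps running unchanged in the data center.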

 

How NFV will push SDN beyond the data center

NFV's use of virtual network overlays could also drive an expansion of this SDN model beyond the data center where it's focused most often today. If NFV allows services to be composed of virtual functions hosted in different data centers, that would require virtual networks to stretch across data centers and become end-to-end. An end-to-end virtual network would be far more interesting to enterprises than one limited to the data center. Building application-specific networks that extend to the branch locations might usher in a new model for application access control, application performance management and even application security.

Will NFV unify differing SDN models?

With the use of network overlays, NFV could also unify the two models of SDN infrastructure -- centralized and distributed. If connectivity control and application component or user isolation are managed by the network overlay, then the physical-network mission of SDN can be more constrained to traffic management. If SDN manages aggregated routes more than individual application flows, it could be more scalable. Remember that the most commonly referenced SDN applications today -- data center LANs and Google's SDN IP core network -- are more route-driven than flow-driven. Unification of the SDN model might also make it easier to sort out SDN implementations. The lower physical network SDN in this two-layer model might easily be created using revisions to existing protocols, which has already been proposed. While it doesn't offer the kind of application connectivity control some would like, that requirement would be met by the higher software virtual network layer or overlay.
Why NFV is the future?
·         Recent tests by network operators and vendors have demonstrated that network functions can operate at the level of several millions of packets per sec, per CPU core
·         This demonstrates that standard high-volume servers have sufficient processing performance to cost-effectively virtualize network appliances
o    The hypervisor need not be a bottleneck
o    The OS need not be a bottleneck
·         Total Cost of Ownership advantages are a huge driver – could be scenario specific but expect significant benefits, e.g., energy savings
·         Advances in virtualization & server technologies have propelled the importance and use of software in many applications and fields
·         A concerted industry effort is underway to accelerate this vision by encouraging common approaches which address the challenges for NFV
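As a sanity check on the packets-per-second claim above, a quick back-of-the-envelope calculation (the 5 Mpps figure is an illustrative assumption, not a benchmark result):

```python
# Back-of-the-envelope throughput implied by a per-core packet rate.
# The 5 Mpps figure is an assumed example, not a measured benchmark.
PPS_PER_CORE = 5_000_000   # assumed packets/sec on one CPU core
SMALL_PKT = 64             # bytes: minimum Ethernet frame
LARGE_PKT = 1500           # bytes: typical MTU-sized frame

def gbps(pps, pkt_bytes):
    return pps * pkt_bytes * 8 / 1e9

print(f"{gbps(PPS_PER_CORE, SMALL_PKT):.2f} Gbps with 64-byte packets")    # 2.56
print(f"{gbps(PPS_PER_CORE, LARGE_PKT):.2f} Gbps with 1500-byte packets")  # 60.00
```

Even at worst-case small packets, a handful of cores adds up to line rate on a 10G interface, which is what makes the claim credible.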

Fields of Application (examples)
·         Application-level optimisation: CDNs, Cache Servers, Load Balancers, Application Accelerators
·         Mobile networks: HLR/HSS, MME, SGSN, GGSN/PDN-GW, Base Station, EPC
·         Home environment: home router, set-top-box
·         Security functions: Firewalls, intrusion detection/protection systems, virus scanners, spam protection
·         Tunnelling gateway elements: IPSec/SSL VPN gateways
·         Traffic analysis/forensics: DPI, QoE measurement
·         Traffic Monitoring, Service Assurance, SLA monitoring, Test and Diagnostics
·         NGN signalling: SBCs, IMS
·         Converged and network-wide functions: AAA servers, policy control and charging platforms
·         Switching elements: BNG, CG-NAT, routers

 

Summary

The table below provides a brief comparison of some of the key points of SDN and NFV.
Category             | SDN                                                                                          | NFV
Reason for Being     | Separation of control and data, centralization of control, programmability of the network    | Relocation of network functions from dedicated appliances to generic servers
Target Location      | Campus, data center / cloud                                                                  | Service provider network
Target Devices       | Commodity servers and switches                                                               | Commodity servers and switches
Initial Applications | Cloud orchestration and networking                                                           | Routers, firewalls, gateways, CDN, WAN accelerators, SLA assurance
New Protocols        | OpenFlow                                                                                     | None yet
Formalization        | Open Networking Foundation (ONF)                                                             | ETSI NFV ISG


PUBLICATION DATE
Dec 30, 2012.
Author

Rajeev Tiwari Principal Software Engineer Technicolor 


Sunday, December 29, 2013

H.265 vs VP9

They are competing next generation video compression formats that claim to be twice as efficient as H.264, the current industry standard. They will be crucial in getting 4K ‘Ultra HD’ content to our televisions, PCs and tablets over the next few years. They also halve the file size of 720p and 1080p content making it far easier to download or stream HD video over slow connections.
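The "twice as efficient" claim translates directly into download sizes. A rough illustration, assuming a ballpark 8 Mbps H.264 bitrate for 1080p (an assumption for illustration, not a measured figure):

```python
# If H.265/VP9 need roughly half the bitrate of H.264 at the same
# quality, file sizes halve too. The 8 Mbps 1080p H.264 bitrate is an
# assumed ballpark figure, not a measurement.
H264_KBPS = 8000
EFFICIENCY = 0.5  # "twice as efficient" -> half the bits

def size_gb(kbps, minutes):
    return kbps * 1000 * minutes * 60 / 8 / 1e9

movie_minutes = 120
print(f"H.264:     {size_gb(H264_KBPS, movie_minutes):.1f} GB")               # 7.2 GB
print(f"H.265/VP9: {size_gb(H264_KBPS * EFFICIENCY, movie_minutes):.1f} GB")  # 3.6 GB
```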
A performance comparison is available in the following IEEE paper:
http://iphome.hhi.de/marpe/download/Performance_HEVC_VP9_X264_PCS_2013_preprint.pdf


Some things to note from this video:
• Google is planning to introduce VP9 to WebRTC by year-end
• VP9 already accounts for 60% of all YouTube videos delivered
• 100 hours of video gets uploaded to YouTube every minute
This shows the ubiquity of VP9 in both encoding and decoding, and how this is used outside of labs and research centers in real, live production systems today.
Competition from H.265, on the other hand, is rather late to the party. The dominant MPEG camp is huddled around H.264 these days, with spotty support of H.265 here and there. The only references I could find for H.265 include a Netflix announcement of planned H.265 support for 4K resolutions and many codec vendors coming up with H.265 support.

H.265 and VP9 support 8K content as well and with physical media on the wane, this makes them quite frankly the future of television and video, which is why they're so important.

H.265 was originally developed as the ‘HEVC’ (High Efficiency Video Coding) format jointly by the Video Coding Experts Group (VCEG) and the Moving Picture Experts Group (MPEG). It was approved as the official successor to H.264 in April 2013. Like H.264, the codec must be licensed, with hardware manufacturers and software developers paying a fee.

By contrast VP9 is open source and royalty free. It was developed by Google as a successor to VP8, the moderately successful alternative to H.264. During its development VP9 was dubbed ‘NGOV’ (Next Gen Open Video) and Google has already integrated support into the Chrome browser and YouTube.

How do they work?

By doing the opposite of what you might expect. While 4K video increases picture quality by making individual pixels smaller, effectively what H.265 does is make them bigger to reduce the bitrate (and therefore file size). It then performs a vast array of processing tricks on the video as it is played to get the detail back.


For context, H.264 could grab a 16x16 ‘macroblock’ of pixels and perform nine ‘intra-prediction directions’ – aka educated guesses – that allowed the pixels to be rebuilt within each block. H.265 can grab 64x64 blocks and perform 35 intra-prediction directions to rebuild them. Like H.264, H.265 varies the size of the blocks it takes. For example, it would take much smaller blocks (down to 4x4 pixels) around detailed areas like facial features, and much bigger blocks for the sky or a relatively plain background.
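The variable block sizing described above is essentially a recursive quadtree split. Here is a toy sketch of the idea, using pixel variance as a stand-in for "detail" (an illustration of the concept, not the actual HEVC or VP9 partitioning algorithm):

```python
# Toy quadtree partitioner: keep splitting a block while it is
# "detailed" (high variance) and larger than 4x4, mimicking how the
# codecs adapt block sizes. Illustrative only, not real HEVC/VP9 code.

def variance(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def partition(pixels, x, y, size, threshold, out):
    block = [row[x:x + size] for row in pixels[y:y + size]]
    if size == 4 or variance(block) <= threshold:
        out.append((x, y, size))  # encode this block as a single unit
        return
    half = size // 2
    for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
        partition(pixels, x + dx, y + dy, half, threshold, out)

# A flat 8x8 block stays whole; a noisy one splits down to 4x4.
flat_block = [[100] * 8 for _ in range(8)]
noisy_block = [[(i * 37 + j * 91) % 255 for j in range(8)] for i in range(8)]

result = []
partition(flat_block, 0, 0, 8, 10.0, result)
print(result)       # one big block: [(0, 0, 8)]

result = []
partition(noisy_block, 0, 0, 8, 10.0, result)
print(len(result))  # split into four 4x4 blocks: 4
```

Smooth regions cost almost nothing to signal, while detailed regions get the small blocks they need, which is exactly the trade the text describes.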
VP9 is similar on the surface. It can also take 64x64 ‘superblocks’, but unlike H.265 these don’t need to be square, so it can sample 64x32 or 4x8 blocks for greater efficiency. On the flip side it only has 10 prediction modes to rebuild them. Cynics argue VP9 changes H.265 just enough for it to avoid copyright infringement.

Needless to say both standards require more computational power than H.264 and VP8 for all this rebuilding. But given the increase in computing power since those formats were launched in 2003 and 2008 respectively, this isn’t a great problem.

Which is better?



The first thing to say is we are greatly simplifying these formats, but – despite similar file sizes – initial reports suggest H.265 has higher image quality while VP9 is more reliable for streaming.

The greater number of prediction modes in H.265 is what gives it the edge visually, while VP9 enforces stricter rules on decoding, which appears to make streams more consistent and reliable. This would make sense given the focus of the standards’ respective creators, though officially both sides dispute there is any downside to their format.

Who is supporting what?



H.265 versus VP9 is a little like HDMI versus DisplayPort in that the latter’s royalty free approach should give it the edge, but the former’s ubiquitous legacy means it has widespread industry support. Previously this made H.264 an easy winner over VP8.

This time around things are closer. Google used CES 2014 to show VP9 has support from LG, Panasonic, Sony, Samsung, Toshiba, Philips, Sharp, ARM, Intel, Nvidia, Qualcomm, Realtek Semiconductor and Mozilla. As mentioned, Google has also built VP9 support into its Chrome browser and YouTube.

The flip side is all these companies have also backed H.265 and even Google will support it in Chrome and hasn’t ruled out YouTube support. In fact, this led to an amusing quote from Francisco Varela, YouTube global head of platform partnerships, that "We are not announcing that we will not support HEVC."

Consequently most companies look like they will support both formats, much like you’d be hard pressed to find an audio player that doesn’t support both MP3 and AAC.

Tuesday, December 17, 2013

SDN, NFV Deployments, Cloud, Big Data in Top Technology Trends for 2014


Top technology trends we should watch for in early 2014: Network Functions Virtualization (NFV) and Software Defined Networking will see an increase in deployments, while cloud-based services (the new term is "open cloud") will continue to rise and Big Data will experience exponential growth.

Big Values in 2014

1) Network Functions Virtualization and Software-Defined "Everything" Will Gain Momentum: Globally, and specifically in Asia-Pacific and Japan, exploration of Network Functions Virtualization (NFV) and software-defined technologies (network, virtualization, data center, storage and infrastructure) will evolve from being simply "research," and enterprises -- particularly in the service provider space -- will begin to roll out production deployments. 

2) Trimming the So-Called Heavily Loaded Data Centers and Cloud Architectures: Technology disruptions are forcing us to fundamentally rethink how networks should be architected, designed, deployed and operated in data centers. Networks are more critical than ever to deliver applications, and we believe fabrics will play a pivotal role in accelerating this transformation -- with drastic improvements in network efficiency, resource utilization and performance.

3) Open/Personal Clouds Loom Large: According to Gartner, the push for more personal cloud technologies will lead to a shift toward services and away from devices.  

4) The Internet Revolution Continues Unabated: Robert Metcalfe, the inventor of Ethernet, states that the power of a network increases by the square of the number of nodes connected to it. 

5) Big Data Becomes Too Big to Handle: BYOD and the explosion of data (especially video content) are causing many new challenges as the amount of data becomes too big to handle in terms of getting value from it and in defining a strategy. 

6) Gamification in the enterprise
Mobile app developers and Web developers have already incorporated game mechanic elements into their experiences. For instance, billion dollar real-time traffic app Waze has a strong gamification element. Users receive points, recognition and rewards for proper usage. Expect the same in the enterprise in 2014. Springshot, a West Coast-based startup, is developing a mobile gamification solution for the aviation industry. Companies like Spotify have already introduced gamification to replace annual reviews. Instead, employees use a Web application that provides weekly and daily summaries of their accomplishments, acknowledgements, and overall productivity.

7) Real-time mobile targeting
With the continual evolution of smartphones, 2014 will see mobile technology take new forms as it becomes a central tool for businesses' marketing and advertising strategies.


Sunday, December 8, 2013

Vine vs. Instagram?

What’s better and where?




While Vine was the first video platform to boom, Instagram launched their own video capabilities just a few months after Vine came to market. Not only did Instagram’s launch of video functionality place them as a direct competitor to Vine, but they one-upped Vine by introducing 15 second video, compared to Vine’s measly 6 seconds.
One of the big questions in the social media world since Instagram video came out is: which is better, Vine or Instagram? The answer lies in the personal preference of the user. There are pros and cons to each platform, and these pros and cons lead to the ultimate decision the user will make. If this is a question that still bounces around in your mind, here’s a list of pros and cons to ease your mind a bit:
Why Vine?
§  A six-second time limit forces a Vine user to think outside the box and be as innovative as possible.
§  Vine constantly loops the video, so if you miss something you can watch it again.
§  Twitter and Vine are linked together, so when you post something on Vine it automatically goes to your Twitter page, too.
Why Instagram?
§  Instagram has a longer recording time (15 seconds) which allows more time to film a typical TV spot.
§  The app also offers more creativity, with filters and editing features that enhance the display of the video.
§  Instagram also allows you to share on not just Facebook and Twitter, but also Tumblr, Flickr and Foursquare.
Not So Good About Vine:
§  Vine lacks editing tools and style filters.
§  You can’t pull prerecorded videos from camera roll.
§  Vine’s user numbers have decreased from 2.9 million to 1.35 million, a decrease of more than 50 percent.
Troubles With Instagram:
§  Instagram doesn’t have the looping option for videos.
§  Just like Vine, Instagram doesn’t support pulling prerecorded videos from camera roll.
§  Because of the 15 second time limit, there are complaints of the video taking too long to load.
With these pros and cons in mind, a user now has a better chance of making a decision that is in line with their personal preference. Whatever the choice, with practice and proper use, you’ll become a video pro in no time!
Description in more detail:
Here’s a look at some of the biggest ways the apps diverge:
Length: The biggest distinguishing factor between these two services is the amount of time they allow for clips. Instagram offers users 15 seconds to Vine’s six — two-and-a-half times the video fun. The longer time-limit is supposed to make it easier for more people to shoot videos, since you don’t have to ration your time quite as jealously as you do with Vine.
More time is not always a good thing. If you have something really fun to film, then it gives you a lot of room to run. But if the video’s boring, fifteen seconds can seem like a lifetime. (At least a waste of time.)
But asking people to take more time to watch your content means there could actually be a higher bar for what makes a good video on Instagram — or at least a lower tolerance threshold for subpar work.
Looping: Vine’s looping is one of its most unique features, tapping into the .gif-sharing culture and providing a good platform for animation. You wouldn’t want videos much longer than six seconds to play on repeat, so it makes sense that Instagram didn’t follow suit with a similar format.
Still, there’s something charming about the loop. The best Vines actually improve on their second or third rewatch, and there’s certainly a thoughtful crowd out there that takes pride in making clips that flow well from beginning to end.
Instagram videos, on the other hand, require users to be thoughtful in a more traditional way — basically, making sure that what they’re posting is worth the time to watch it.
 Shooting: Shooting on the two apps is similar, but not identical. On Vine, you can hold your finger down anywhere on the screen to start recording. With Instagram, you have to hit a dedicated button on the screen. It’s big and red, but it’s still not quite as easy to use when shooting.
The trade-off, however, is that Instagram lets you tap-to-focus in the midst of your shooting, opening up the intriguing possibility of switching the action from background to foreground and vice-versa.
Instagram also includes a nifty feature that lets you stabilize your video after you shoot it — again, a feature that makes sense when dealing with longer clips.
Editing: Instagram brings two new additions to the editing table that Vine doesn’t have — the ability to delete and the option to add one of its signature filters to your videos.
Being able to delete is a good touch, particularly if Instagram is looking for a more thoughtful phone video crowd. And filters can cover up a multitude of lighting and shooting sins, even if they can’t make your video any more entertaining.
Not to be outdone, Vine may be looking to deal with bloopers in the future as well. Ahead of Facebook’s announcement, Vine released a short video of a phone running a version of Vine that apparently can save drafts — a hint of what may be coming in the future.
Convenience: Instagram’s video comes as a mode within the existing app, while Vine stands alone. It’s a smart move for Instagram, given that it means they already have a video app with 130 million monthly users.
That could be the result of the lesson Facebook learned from its self-destructing Poke video app, a separate app that has failed to pick up many users. On the other hand, having a stand-alone app means that you’re fewer taps away from making a quick video in the moment.
Although Instagram and Vine are comparable in many ways (how to shoot video, the ability to add a text description using hashtags, and the ability to upload and share videos across multiple social media platforms), there are a number of differences between the two platforms:
                                                



Friday, December 6, 2013

Network latency, packet-loss and bandwidth simulation on Mac

Sometimes while testing you may want to be able to simulate network latency, or packet loss, or low bandwidth. I have done this with Linux and tc/netem as well as with Shunra on Windows.

It turns out that Mac OS X includes ‘dummynet’ from FreeBSD, which has the capability to do this WAN simulation.

Here is a quick example:
  • Inject 250ms latency and 10% packet loss on connections between my workstation and my development web server (10.0.0.1)
  • Simulate maximum bandwidth of 1Mbps
# Create 2 pipes and assign traffic to and from our webserver to each:
$ sudo  ipfw add pipe 1 ip from any to 10.0.0.1
$ sudo  ipfw add pipe 2 ip from 10.0.0.1 to any


# Configure the pipes we just created:
$ sudo ipfw pipe 1 config delay 250ms bw 1Mbit/s plr 0.1
$ sudo ipfw pipe 2 config delay 250ms bw 1Mbit/s plr 0.1

A quick test:
$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=63 time=515.939 ms
64 bytes from 10.0.0.1: icmp_seq=1 ttl=63 time=519.864 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=63 time=521.785 ms
Request timeout for icmp_seq 3
64 bytes from 10.0.0.1: icmp_seq=4 ttl=63 time=524.461 ms
Disable:
$sudo ipfw list |grep pipe
  01900 pipe 1 ip from any to 10.13.1.133 out
  02000 pipe 2 ip from 10.13.1.133 to any in
$ sudo ipfw delete 01900
$ sudo ipfw delete 02000


# or, flush all ipfw rules, not just our pipes
$ sudo ipfw -q flush
Notice that the round-trip on the ping is ~500ms. That is because we applied a 250ms latency to both pipes, incoming and outgoing traffic.

Our example was very simple, but you can get quite complex, since “pipes” are applied to traffic using standard ipfw firewall rules. For example, you could specify different latency based on port, host, network, etc. Packet loss is configured with the “plr” parameter; valid values are 0 - 1. In our example above we used 0.1, which equals 10% packet loss.

This is a very handy way for developers on Macs to test their applications in a variety of network environments.
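The way the two pipes compound can be checked arithmetically; this small Python sketch mirrors the delay and plr values configured above:

```python
# Each direction of a ping traverses one dummynet pipe, so the delays
# add and the loss probabilities compound. Values match the config above.
PIPE_DELAY_MS = 250   # "config delay 250ms" on each pipe
PIPE_PLR = 0.1        # "plr 0.1" = 10% loss on each pipe

added_rtt_ms = 2 * PIPE_DELAY_MS
# A ping succeeds only if both the request and the reply survive.
ping_loss = 1 - (1 - PIPE_PLR) ** 2

print(f"added round-trip latency: {added_rtt_ms} ms")  # 500 ms
print(f"effective ping loss: {ping_loss:.0%}")         # 19%
```

So the ~515ms pings and the occasional timeout in the output above are exactly what the two pipes predict; note the observed ping loss is roughly 19%, not 10%, because a ping can be dropped in either direction.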

Tuesday, November 26, 2013

Why Software Defined Networking (SDN)??


Software-defined networking (SDN) is an approach to networking in which control is decoupled from hardware and given to a software application called a controller. When a packet arrives at a switch in a conventional network, rules built into the switch's proprietary firmware tell the switch where to forward the packet. The switch sends every packet going to the same destination along the same path, and treats all the packets the exact same way. In the enterprise, smart switches designed with application-specific integrated circuits (ASICs) are sophisticated enough to recognize different types of packets and treat them differently, but such switches can be quite expensive.

The goal of SDN is to allow network engineers and administrators to respond quickly to changing business requirements. In a software-defined network, a network administrator can shape traffic from a centralized control console without having to touch individual switches. The administrator can change any network switch's rules when necessary -- prioritizing, de-prioritizing or even blocking specific types of packets with a very granular level of control.

This is especially helpful in a cloud computing multi-tenant architecture because it allows the administrator to manage traffic loads in a flexible and more efficient manner. Essentially, this allows the administrator to use less expensive commodity switches and have more control over network traffic flow than ever before.
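The granular, centrally programmed rule control described above can be sketched as a priority-ordered match/action table, loosely modeled on OpenFlow-style rules (the field names and actions here are illustrative, not a real controller API):

```python
# Sketch of a centrally programmed flow table: the highest-priority
# matching rule wins, so an administrator can block or prioritize
# specific traffic without touching individual switches.
# Rule fields and actions are illustrative, not a real controller API.

RULES = [
    # (priority, match fields, action)
    (300, {"proto": "tcp", "dst_port": 25}, "drop"),         # block SMTP
    (200, {"proto": "udp", "dst_port": 5060}, "prioritize"), # VoIP first
    (100, {"proto": "tcp"}, "forward"),
    (0,   {}, "forward"),                                    # default rule
]

def classify(pkt):
    for prio, match, action in sorted(RULES, key=lambda r: -r[0]):
        if all(pkt.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(classify({"proto": "tcp", "dst_port": 25}))    # drop
print(classify({"proto": "udp", "dst_port": 5060}))  # prioritize
print(classify({"proto": "icmp"}))                   # forward
```

Changing network behavior then amounts to editing the rule table on the controller; the switches themselves never need to be reconfigured one by one.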

The Benefits of SDN
With a centralized, programmable network that can automatically and dynamically address changing requirements, SDN can:
1. Reduce CapEx: reducing the need to purchase purpose-built, ASIC-based networking hardware and supporting pay-as-you-grow models to eliminate wasteful overprovisioning.
2. Reduce OpEx: enabling algorithmic control of the network, through network elements that are increasingly programmable, making it easier to design, deploy, manage and scale networks. The ability to automate provisioning and orchestration not only reduces overall management time, but also the chance for human error, improving service availability and reliability.
3. Deliver Agility and Flexibility: helping organizations rapidly deploy new applications, services and infrastructure to quickly meet their changing business goals and objectives.
4. Enable Innovation: enabling organizations to create new types of applications, services and business models that can create new revenue streams and more value from the network

5 reasons why software defined networking makes a difference

1. SDN Creates New Revenue Streams
SDN reduces both capital and operating expense by simplifying and automating management, avoiding over-provisioning, and reducing human error (which is the most common cause of network configuration failures).  Further, it allows you to offer new features and functions that would be very difficult or prohibitively expensive on your current data center network.  A good example was given at the Open Ethernet Forum, when Verizon described how they plan to use SDN for better-quality downloads of streaming video.  Since SDN controllers potentially have access to resources outside the network, such as the type of encoding used on a video file, they can adjust the network provisioning to accommodate a 3D high-definition video vs. a home movie of your cat, dynamically giving each one the appropriate amount of network resources. The result is a better viewing experience than you’d get over someone else’s network.

2) SDN Guarantees Better Quality of Service
This is a consequence of centralized, programmable management; SDN can view the entire network topology, not just the next hop as in conventional networks.  Also, today’s network treats switches and routers as if they were a “one size fits all” appliance.  It’s up to a highly skilled network administrator to translate application requirements into terms the network operating system can implement. Often these translations are approximations at best, resulting in poor utilization of network resources.  By creating the equivalent of a single operating system for the entire network, SDN changes the game, allowing us to program network configurations.  And if we can program something, we can automate it and eventually optimize it.  We can dynamically create service chains, or virtual paths through the network which interconnect firewalls, load balancers, and other functions. That’s what we mean by an application aware network.   For example, SDN adopters such as Tervela (who does global financial trading and risk analysis) and Selerity (who provides ultra low latency transaction processing) require high availability disjoint paths through their network and consistently low latency. SDN allows them to program alternate end-to-end paths in advance; if a network link fails, the recovery time is over ten times faster than conventional Ethernet.

3) SDN Provides Faster Time to Value
This is a result of SDN making updates in software, rather than hardware.  You wouldn’t virtualize your servers or storage if it meant sending a technician with a screwdriver to reconfigure circuit boards every day.  And yet, during a presentation at the 2013 OFC/NFOEC conference, a Cisco Senior VP said that it currently takes 5 days to fully bring a multi-tier workload online, including configuring network appliances, storage, and more.  SDN allows you to create, modify, and remove virtual network configurations in minutes, not days; with overlays like DOVE, you never have to touch the underlying IP switches. We can better integrate networking with servers and storage to create rapidly deployable, turnkey solutions (like IBM PureSystems).  The same holds true for interconnecting multiple data centers.  Reprovisioning the WAN currently takes days or weeks, but it’s possible to orchestrate the networks within and between data centers from a common controller, reducing this time to minutes (some of IBM’s work in this area will be published this summer, in collaboration with the New York State Center for Cloud Computing & Analytics).




4) SDN Provides Better Security
This is a bit more subtle, but makes sense when you think about it.  SDN protocols such as OpenFlow can be used as policy-based packet filters, diverting traffic from known “black lists” of suspect data sources.  SDN overlays like DOVE (an IETF industry standard, now available as part of the IBM Software Defined Network for Virtual Environments) allow you to create huge numbers of VLANs, and scale them to large networks with a network connectivity service.  Combined with virtual hypervisor switches like the IBM 5000v, you can drive isolated multi-tenancy all the way back into the server hypervisor.  Further, a centralized SDN controller cluster is easier to defend than a network with thousands of switches running their own independent operating systems.  SDN should make it easier to pass security compliance audits, since the entire network policy is contained in one place.  Virtual security appliances can quickly be provisioned as waypoints on a DOVE overlay network.

5) SDN Provides an Open, Standards-Based Environment
There are many benefits from using open source Linux server operating systems.  Through the Linux Foundation, SDN is building the equivalent of Linux for the data network, with the same expected benefits.  The recently announced OpenDaylight project, the largest open source effort in history, provides an open source community to accelerate SDN adoption.  IBM is a long-standing supporter of open standards, from our early efforts with Linux on the mainframe to the Open Data Center Interoperable Network (ODIN), and we’ve published extensive interop testing with other vendors’ networking products.  As a founding member of OpenDaylight, we’re pleased to bring this same approach to data center networking.  This ecosystem creates a wider variety of new features for your network faster than ever before (analogous to the app store for your smart phone).

SDN industry momentum




Programming flow architecture






*some data is taken from web