Saturday, January 25, 2014

Challenges of Distributed Systems

As we think about large-scale web applications, we need storage backends that scale and support concurrency. By scalability, we mean the ability to increase data capacity and to sustain growing read/write throughput. The application servers in our model handle huge numbers of requests in parallel, and as they in turn rely on the backend storage systems, we need to cope with highly concurrent access at this level as well.

Throughput and capacity increases can only be achieved sustainably by scaling horizontally. A single database server can only handle a certain load, even with specialized hardware. As a consequence, we need a concurrent and distributed system for scalable backend storage.

The backend storage must persistently hold the application state, so we also expect some form of data consistency when reading and writing data in our application. At the same time, we are dealing with a distributed system, so we must anticipate failures. In case of a failure of a single node, we still want the overall storage system to operate seamlessly and maintain its availability. Likewise, since storage operations are executed as part of web requests, we demand low operation latency.

These general requirements lead us to an important barrier of distributed systems: Brewer's theorem on the trade-off between consistency, availability and partition tolerance.


Consistency

The consistency property describes a consistent view of data on all nodes of the distributed system. That is, the system assures that operations have an atomic characteristic and changes are disseminated simultaneously to all nodes, yielding the same results.

Availability

This property demands that the system eventually answers every request, even in case of failures. This must be true for both read and write operations. In practice, this property is often narrowed down to bounded responses in reasonable time. However, Gilbert and Lynch have confirmed the theorem even for unbounded, eventual responses.

Partition tolerance

This property describes the system's resilience to message losses between nodes. A partition is an arbitrary split between nodes of a system, resulting in complete message loss between them. This affects the two prior properties: mechanisms for maintaining consistency must cope with message losses, and according to the availability property, every node of any potential partition must be able to respond to a request.


The core statement of Brewer's theorem now goes as follows:


"You can have at most two of these properties for any shared-data system."


We have seen that all properties are desirable. But any real system must trade off the properties and dismiss at least one of them, as shown in the figure below. So we have three distinct combinations with significantly different characteristics.

Figure: Different properties that a distributed system can guarantee at the same time, according to Brewer's theorem

Consistency & Availability (CA)

The group of distributed systems following this model provides a consistent and available service, but does not tolerate partitions. In case of a partition, these systems may become inconsistent, as we will see soon. The combination is also known as high-availability consistency.

Contenders following this approach include most of the traditional relational database management systems. Replication is an important mechanism for achieving highly available consistency, and transaction protocols such as the two-phase commit protocol are applied to ensure consistency. The separation into partitions may lead to so-called "split brain" scenarios, in which different partitions create conflicting replicas as a result of isolation.

Recovering from such scenarios would require some kind of consensus protocol. This in turn would prevent nodes from servicing requests unless a consensus is reached. We would thus convert our CA approach into a CP approach at the cost of availability. These shortcomings in coping with network errors render the CA approach less suitable for larger distributed database systems.
Consistency & Partition Tolerance (CP)

Distributed systems at the intersection of consistency and partition tolerance provide a strongly consistent service. Consistency is guaranteed even in the presence of a partition. However, nodes of a partition may not be able to respond to requests as long as they have not yet agreed with other nodes that may be temporarily unreachable. As a result, availability cannot always be provided. This combination of properties is also known as enforced consistency.

In practice, this approach is important when consistency must be enforced at all costs and the system is still inherently distributed by design. A banking application, where the correctness of account balances is a primary constraint, is a typical example. Database systems implementing this model are often based on relational database systems as well. Supporting consistent states even in case of network errors requires sophisticated algorithms for quorum and majority decisions. One such protocol for solving consensus is the Paxos protocol [Lam98].
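To make the idea of quorum-based decisions more concrete, here is a minimal, hypothetical sketch (not taken from any particular system, and far simpler than Paxos): with N replicas, W write acknowledgements and R read acknowledgements, read and write quorums overlap on at least one replica whenever R + W > N, which is the basic condition strict-quorum CP stores rely on.

// Hypothetical sketch: strict-quorum overlap condition (illustrative only).
#include <iostream>

// With n replicas, a write waits for w acks and a read queries r replicas.
// If r + w > n, every read quorum intersects every write quorum, so a read
// always sees the latest acknowledged write.
bool quorum_guarantees_overlap(int n, int w, int r)
{
    return r + w > n;
}

int main()
{
    std::cout << std::boolalpha
              << quorum_guarantees_overlap(5, 3, 3) << "\n"   // true
              << quorum_guarantees_overlap(5, 2, 2) << "\n";  // false: stale reads possible
}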
Availability & Partition Tolerance (AP)

Distributed systems that are always available and tolerate partitions maintain their service even when partitions occur. However, a reachable node may then hold (temporarily) inconsistent state. Thus, this intersection of properties is also known as eventual consistency.


In terms of data and state, sacrificing strong consistency guarantees appears questionable at first glance. However, many applications can indeed tolerate deferred consistency when favoring availability at all costs. In this case, it is important to keep potential issues caused by eventually consistent data in mind at the application level during development. Well-known examples following this approach are DNS and web caches. Stale data (e.g. host mappings or cached responses) are acceptable for a while, but eventually the latest version of the data disseminates and purges older entries. As eventual consistency entails the possibility of data conflicts, appropriate resolution mechanisms must be employed. Often, conflict resolution is also part of the application logic, and not only provided by the database system. In general, conflicts are resolved either on read or write operations, or asynchronously in the background, decoupled from client operations.
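As an illustration of conflict resolution, here is a minimal sketch of one common strategy, "last write wins" based on a per-value timestamp; the types and names are made up for this example, and real systems often use vector clocks or application-specific merge logic instead.

// Hypothetical sketch: last-write-wins conflict resolution on read (read repair).
#include <cstdint>
#include <string>

struct VersionedValue {
    std::string data;
    std::uint64_t timestamp;  // wall-clock or logical clock captured at write time
};

// Given two conflicting replicas of the same key, keep the newer one.
VersionedValue resolve_conflict(const VersionedValue& a, const VersionedValue& b)
{
    return (a.timestamp >= b.timestamp) ? a : b;  // ties broken arbitrarily
}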

Tuesday, January 14, 2014

VP9 Video Codec

Video data accounts for a significant proportion of all internet traffic, and the trend is toward higher quality, larger format and often professionally produced video, encoded at higher data rates and supported by the improved provisioning of high-bandwidth internet connections. VP9 is being developed as an open source solution tailored to the specific characteristics of the internet, under the auspices of the WebM project, with the aim of providing the highest quality user experience and supporting the widest range of use cases on a diverse set of target devices.

This document provides a high-level technical overview of the coding tools that will likely be included in the final VP9 bitstream. A large proportion of the advance that VP9 has made over VP8 can be attributed to a straightforward generational progression, driven by the need for the greater efficiency required to handle a new coding "sweet spot" that has evolved to support larger frame sizes and higher quality video formats.
VP9 includes many design improvements compared to VP8. Most notably, it supports superblocks, which are partitioned using a quadtree coding structure.
Google's main focus was on addressing the following known issues of VP8 relative to its arch rival H.264:

Tiled Encoding for Higher Resolutions
1. Tiled encoding of new content on YouTube (recode of older content in Q1).
2. Tile counts by resolution: 2 tiles for 480p and 720p, 4 tiles for 1080p, 8 tiles for 4K (2 tiles for 360p coming in Q1 2014).
New prediction modes
A large part of the coding efficiency improvement achieved by VP9 can be attributed to the introduction of larger prediction block sizes. Specifically, VP9 introduces the notion of superblocks of size up to 64x64 and their quad-tree like decomposition all the way down to a block size of 4x4, with some quirks as described below. In particular, a superblock of size 64x64 (SB64) can be split into 4 superblocks of size 32x32 (SB32), each of which can be further split into 16x16 macroblocks (MB). Each SB64, SB32 or MB can be predicted as a whole using a conveyed INTRA prediction mode, or an INTER prediction mode with up to two motion vectors and corresponding reference frames, as described in Section 3.2.

A macroblock can be further split using one of three mode families: B_PRED, where each 4x4 sub-block within the MB can be coded using a signaled 4x4 INTRA prediction mode; I8X8_PRED, where each 8x8 block within the MB can be coded using a signaled 8x8 INTRA prediction mode; and SPLITMV, where each 4x4 sub-block within the MB is coded in INTER mode with a corresponding motion vector, but with the option of grouping common motion vectors over 16x8, 8x16, or 8x8 partitions within the MB. Note that the B_PRED and SPLITMV modes in VP9 work in the same way as they do in VP8. Experiments with 1/8-pel and sub-pixel motion estimation are covered in the Sub-Pixel Interpolation section below.
In short, the following prediction modes are applicable:
1. Intra modes
2. Inter mode
3. Compound INTER-INTRA Mode
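To make the quad-tree decomposition described above more concrete, here is a minimal sketch of the recursive splitting of a 64x64 superblock down to 16x16 macroblocks; the should_split() predicate is a placeholder for the encoder's rate-distortion decision and is not part of the VP9 specification.

// Illustrative sketch of quad-tree block partitioning (not actual VP9 code).
#include <cstdio>

// Placeholder decision: a real encoder would compare rate-distortion costs here.
bool should_split(int x, int y, int size)
{
    return size > 16;
}

void partition(int x, int y, int size)
{
    if (size > 16 && should_split(x, y, size)) {
        int half = size / 2;
        partition(x,        y,        half);
        partition(x + half, y,        half);
        partition(x,        y + half, half);
        partition(x + half, y + half, half);
    } else {
        std::printf("code %dx%d block at (%d,%d)\n", size, size, x, y);
    }
}

int main()
{
    partition(0, 0, 64);  // decompose one superblock
}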
Sub-Pixel Interpolation
The filters used for sub-pixel interpolation of fractional motion are critical to the performance of a video codec. The maximum motion vector precision supported is 1/8-pixel, with the option of switching between 1/4-pixel and 1/8-pixel precision using a frame-level flag. Even when 1/8-pixel precision is enabled for a frame, it is only used for small motion, depending on the magnitude of the reference motion vector. For larger motion, indicated by a larger reference motion vector, there is almost always motion blur, which obviates the need for higher precision interpolation. VP9 defines a family of three 8-tap filters, selectable at either the frame or macroblock level in the bitstream:
8-tap Regular: an 8-tap Lagrangian interpolation filter designed using the int_filt function in MATLAB.
8-tap Sharp: a DCT-based interpolation filter with a sharper response, used mostly around sharper edges.
8-tap Smooth (non-interpolating): a filter with a smoother response.
The simple concept is (a small interpolation sketch follows this list):
Sub-pel precision and filters can have a big impact on compression efficiency... BUT:
Longer filters -> better interpolation (but more cost)
Higher precision sub-pel -> better interpolation (but more cost)
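To illustrate what an 8-tap sub-pixel filter does, here is a small sketch of a horizontal interpolation step; the tap values are deliberately left as a parameter because the actual VP9 coefficients depend on the chosen filter family and sub-pixel phase and are defined by the bitstream specification.

// Illustrative 8-tap horizontal interpolation (coefficients supplied by the caller).
#include <cstdint>

// src points at the first of the 8 input samples covered by the taps.
// Filters of this kind are typically normalized to a sum of 128 (7-bit precision).
std::uint8_t interpolate_8tap(const std::uint8_t* src, const std::int16_t taps[8])
{
    std::int32_t sum = 0;
    for (int i = 0; i < 8; ++i)
        sum += taps[i] * src[i];
    sum = (sum + 64) >> 7;           // round and scale back to pixel range
    if (sum < 0)   sum = 0;          // clamp to valid 8-bit pixel values
    if (sum > 255) sum = 255;
    return static_cast<std::uint8_t>(sum);
}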
Entropy coding
Entropy coding in VP9 is much more adaptive to extremes and better behaved for large image formats. Coding of complex uncorrelated motion and very dense residual signals is still an area that needs improvement.
  • New predictive models and contexts.
  • More efficient updates / adaptation.
  • Segment level and SB level adjustments
  • Reference frame contextual coding
  • Expanding the previous coef-contexts
  • Modifications to coding of explicit segment map (differential coding option and contexts)
  • Separate coding context for different frame types
Next steps: a major overhaul of how motion is coded.
New Transform Modes
VP9 supports the Discrete Cosine Transform (DCT) at sizes 4x4, 8x8, 16x16 and 32x32, with a 64x64 superblock transform under development, and removes the second-order transform that was employed in VP8. Only transform sizes equal to, or smaller than, the prediction block size may be specified. Modes B_PRED and 4x4 SPLITMV are thus restricted to using only the 4x4 transform; modes I8X8_PRED and non-4x4 SPLITMV can use either the 4x4 or 8x8 transform; full-size (16x16) macroblock predictors can be coupled with either the 4x4, 8x8 or 16x16 transforms; and superblocks can use any transform size up to 32x32. Further restrictions on the available subset of transforms can be signaled at the frame level, by specifying a maximum allowable transform size, or at the macroblock level by explicitly signaling which of the available transform sizes is used. In addition, VP9 introduces support for a new transform type, the Asymmetric Discrete Sine Transform (ADST), which can be used in combination with specific intra-prediction modes.
 
Segmentation:
VP9 offers an optional feature known as segmentation. When enabled, the bitstream codes a segment ID for each block, which is a number between 0 and 7. Each of these eight segments can have any of the following four features enabled (a rough data-structure sketch follows this list):
Skip – blocks belonging to a segment with the skip feature active are automatically assumed to have no residual signal. This is useful for static backgrounds.
Alternate quantizer – blocks belonging to a segment with the AltQ feature enabled may use a different inverse quantization scale factor. This is useful for regions that require more (or less) detail than the rest of the picture, or it can be used for rate control.
Ref – blocks belonging to a segment that has the Ref feature enabled are assumed to use a particular reference frame (Last, Golden, or AltRef), as opposed to the bitstream explicitly transmitting the reference frame as usual.
AltLf – blocks belonging to a segment that has the AltLf feature enabled use a different smoothing strength when being loop-filtered. This can be useful for smooth areas that would otherwise appear too blocky: they can get more smoothing without having to smooth the entire picture more.
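The per-segment feature set can be pictured as a small table with one entry per segment ID; the sketch below is a rough illustration with made-up field names, not the reference decoder's actual data structures.

// Rough illustration of the segmentation features (8 segments, 4 optional features each).
#include <array>
#include <cstdint>

struct SegmentFeatures {
    bool         skip            = false;  // no residual signal for this segment
    bool         has_alt_q       = false;
    std::int16_t alt_quantizer   = 0;      // alternate quantizer value
    bool         has_ref         = false;
    std::uint8_t reference_frame = 0;      // Last, Golden or AltRef
    bool         has_alt_lf      = false;
    std::int16_t alt_loop_filter = 0;      // alternate loop-filter strength
};

// One entry per segment ID (0..7); each block selects an entry via its coded segment ID.
using SegmentationTable = std::array<SegmentFeatures, 8>;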

Loop Filter: Like AVC and HEVC, VP9 specifies a loop filter that is applied to the whole picture after it has been decoded. It attempts to clean up the blocky artifacts that can occur. The loop filter operates per superblock, filtering first the vertical edges and then the horizontal edges of each superblock. The superblocks are processed in raster order, regardless of any tile structure (see the sketch after this list). This is unlike HEVC, where all vertical edges of the frame are filtered before any horizontal edges. There are four different filters:
16-wide, 8 pixels on each side of the edge
8-wide, 4 pixels on each side of the edge
4-wide, 2 pixels on each side of the edge
2-wide, 1 pixel on each side of the edge
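The traversal order is easy to picture in code; the following sketch only shows the ordering (raster order over superblocks, vertical edges before horizontal edges), with the two edge filters left as placeholder stubs.

// Illustrative sketch of the loop-filter traversal order (not actual VP9 code).
#include <cstdio>

// Placeholder edge filters: a real decoder applies the 2/4/8/16-wide filters here.
void filter_vertical_edges(int sb_row, int sb_col)   { std::printf("v %d,%d\n", sb_row, sb_col); }
void filter_horizontal_edges(int sb_row, int sb_col) { std::printf("h %d,%d\n", sb_row, sb_col); }

void loop_filter_frame(int sb_rows, int sb_cols)
{
    for (int r = 0; r < sb_rows; ++r)            // superblocks in raster order, ignoring tiles
        for (int c = 0; c < sb_cols; ++c) {
            filter_vertical_edges(r, c);         // vertical edges of the superblock first
            filter_horizontal_edges(r, c);       // then its horizontal edges
        }
}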

Scales better for larger images.
Comparison to existing codecs
The paper below compares the performance of H.264/MPEG-AVC, H.265/MPEG-HEVC (High-Efficiency Video Coding) and the recently published VP9 video coding scheme, and discusses the current status: http://iphome.hhi.de/marpe/download/Performance_HEVC_VP9_X264_PCS_2013_preprint.pdf

Industry coverage

Who is supporting what?
H.265 versus VP9 is a little like HDMI versus DisplayPort in that the latter’s royalty free approach should give it the edge, but the former’s ubiquitous legacy means it has widespread industry support. Previously this made H.264 an easy winner over VP8.  This time around things are closer. Google used CES 2014 to show VP9 has support from LG, Panasonic, Sony, Samsung, Toshiba, Philips, Sharp, ARM, Intel, Nvidia, Qualcomm, Realtek Semiconductor and Mozilla. As mentioned, Google has also built VP9 support into its Chrome browser and YouTube.
The flip side is that all these companies have also backed H.265, and even Google will support it in Chrome and hasn't ruled out YouTube support. In fact, this led to an amusing quote from Francisco Varela, YouTube global head of platform partnerships, that "We are not announcing that we will not support HEVC." Consequently most companies look like they will support both formats, much like you'd be hard pressed to find an audio player that doesn't support both MP3 and AAC.

Do we need to worry about format support?
With the decline of physical media and the rise of 4K Ultra HD, there has never been greater pressure on new video compression standards to deliver. Thankfully both do, if in slightly different ways, and – unlike past format wars – there is likely to be space for both as the industry seems reluctant to wholly commit to a) a future paying license fees, or b) being beholden to Google. That means it's very likely that most devices you buy will support both, which is great news for everyone.

Saturday, January 4, 2014

5 reasons to start using C++11

Get performance benefits
#1.
The hash tables which have now become standard provide faster insertion, deletion and lookup than their ordered counterparts, which can be very useful when handling large amounts of data. You now have unordered_map, unordered_multimap, unordered_set and unordered_multiset at your disposal.
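For example, counting word frequencies with the new hash-based map looks like this (a small self-contained sketch):

#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main()
{
    std::vector<std::string> words = {"apple", "pear", "apple"};

    std::unordered_map<std::string, int> word_count;
    for (const auto& w : words)
        ++word_count[w];                       // average O(1) insert and lookup

    std::cout << word_count["apple"] << "\n";  // prints 2
}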

#2.
With the use of type traits (e.g. is_floating_point) and template metaprogramming (e.g. the enable_if template), you can specialize your templates for types with particular characteristics and thus implement optimizations.
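A small illustration (with made-up function names) of this kind of specialization: a bulk copy that uses memcpy for trivially copyable types and falls back to element-wise assignment otherwise.

#include <cstring>
#include <type_traits>

// Fast path: bitwise copy is safe for trivially copyable types.
template <typename T>
typename std::enable_if<std::is_trivially_copyable<T>::value>::type
copy_elements(T* dst, const T* src, std::size_t n)
{
    std::memcpy(dst, src, n * sizeof(T));
}

// Generic path: use copy assignment for everything else.
template <typename T>
typename std::enable_if<!std::is_trivially_copyable<T>::value>::type
copy_elements(T* dst, const T* src, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i];
}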

#3.
Lambda expressions provide a way to define anonymous function objects (which are actually closures) right where they are used, thus making the code more linear and easier to follow. This is very convenient in combination with STL algorithms:
// Member-function sketch (requires <algorithm>); _tanks and _min_fuel_level are class members.
bool is_fuel_level_safe()
{
    return std::all_of(_tanks.begin(), _tanks.end(),
        [this](Tank& t) { return t.fuel_level() > _min_fuel_level; });
}
The new smart pointers, which have replaced the problematic auto_ptr, also allow you to stop worrying about memory cleanup and to remove the cleanup code. That is good for clarity, and for avoiding memory leaks and the time spent hunting them down.
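A short sketch of how the new smart pointers look in practice (Sensor and poll are made-up names for illustration):

#include <memory>
#include <vector>

struct Sensor {
    void poll() {}
};

int main()
{
    // Exclusive ownership: the object is deleted automatically at scope exit.
    std::unique_ptr<Sensor> local(new Sensor());
    local->poll();

    // Shared ownership: the object lives as long as any shared_ptr to it does.
    auto shared = std::make_shared<Sensor>();
    std::vector<std::shared_ptr<Sensor>> observers{shared, shared};
}   // no explicit delete anywhere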

#4.
To explain move semantics very briefly: they are a way to optimize copying. Sometimes copying is obviously wasteful. If you're copying from a temporary string object, simply copying the pointer to the character buffer would be much more efficient than creating a new buffer and copying the characters. This works because the source object is about to go out of scope.
However, previously there was no mechanism in C++ to figure out whether the source object is a temporary or not. Move semantics provide this mechanism by allowing you to have a move constructor and a move assignment operator in addition to the copy operations.
Did you know that if you're using classes from the standard library like string or vector in Visual Studio 2010, they already support move semantics? This helps prevent unnecessary copying and therefore improves performance.
By implementing move semantics in your own classes you can get additional performance improvements, for example when you store them in STL containers. Also, keep in mind that move semantics can be applied not only to constructors but also to methods (such as vector's push_back).
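As a minimal sketch, here is what a move constructor and move assignment might look like for a class that owns a heap buffer (the class and its members are made up for illustration):

#include <cstddef>
#include <utility>

class Buffer {
public:
    explicit Buffer(std::size_t n) : data_(new char[n]), size_(n) {}
    ~Buffer() { delete[] data_; }

    // Move operations: steal the source's pointer instead of copying its contents.
    Buffer(Buffer&& other) noexcept : data_(other.data_), size_(other.size_)
    {
        other.data_ = nullptr;
        other.size_ = 0;
    }
    Buffer& operator=(Buffer&& other) noexcept
    {
        std::swap(data_, other.data_);
        std::swap(size_, other.size_);
        return *this;
    }

    // Copying is disabled here just to keep the example short.
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;

private:
    char*       data_;
    std::size_t size_;
};

With this in place, pushing a Buffer into a std::vector<Buffer> with std::move moves the heap pointer into the container instead of copying the data.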

#5.
Having functions as first-class objects is a very powerful feature that allows your code to be flexible and generic. C++11 makes a step in this direction with std::function. std::function provides a way to wrap and pass around anything callable – function pointers, functors, lambdas and more.
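A short example of wrapping different callables behind a single std::function type (the names are illustrative):

#include <functional>
#include <iostream>

int add(int a, int b) { return a + b; }

struct Multiply {
    int operator()(int a, int b) const { return a * b; }
};

int main()
{
    std::function<int(int, int)> op;

    op = add;                                 // free function
    std::cout << op(2, 3) << "\n";            // 5

    op = Multiply();                          // functor
    std::cout << op(2, 3) << "\n";            // 6

    op = [](int a, int b) { return a - b; };  // lambda
    std::cout << op(2, 3) << "\n";            // -1
}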