Saturday 4 July 2015

HTTP/2

HTTP/2 is a replacement for how HTTP is expressed "on the wire". It is not a ground-up rewrite of the protocol; HTTP methods, status codes and semantics are the same, and it should be possible to use the same APIs as HTTP/1.x (possibly with some small additions) to represent the protocol. The focus of the protocol is on performance; specifically, end-user perceived latency and network and server resource usage. One major goal is to allow the use of a single connection from browsers to a Web site. The basis of the work was SPDY, but HTTP/2 has evolved to take the community's input into account, incorporating several improvements in the process.

HTTP/2 provides an optimized transport for HTTP semantics. It supports all of the core features of HTTP/1.1 but aims to be more efficient in several ways. The basic protocol unit in HTTP/2 is a frame, and each frame type serves a different purpose. For example, HEADERS and DATA frames form the basis of HTTP requests and responses; other frame types like SETTINGS, WINDOW_UPDATE, and PUSH_PROMISE support other HTTP/2 features.

Multiplexing of requests is achieved by associating each HTTP request-response exchange with its own stream. Streams are largely independent of each other, so a blocked or stalled request or response does not prevent progress on other streams.

HTTP/2 also adds a new interaction mode, whereby a server can push responses to a client. Server push allows a server to speculatively send data it anticipates the client will need, trading some network usage against a potential latency gain. The server does this by synthesizing a request, which it sends as a PUSH_PROMISE frame, and then sending the response to that synthetic request on a separate stream.

Because the HTTP header fields used in a connection can contain large amounts of redundant data, frames that contain them are compressed. This is especially advantageous for request sizes in the common case, allowing many requests to be compressed into one packet.
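To make the frame layout concrete, here is a minimal sketch in Python (my own illustration, not code from the specification) that decodes the fixed 9-octet header every HTTP/2 frame begins with: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a reserved bit followed by a 31-bit stream identifier, as defined in RFC 7540, section 4.1.

# Frame type codes from RFC 7540, section 6.
FRAME_TYPES = {
    0x0: "DATA", 0x1: "HEADERS", 0x2: "PRIORITY", 0x3: "RST_STREAM",
    0x4: "SETTINGS", 0x5: "PUSH_PROMISE", 0x6: "PING", 0x7: "GOAWAY",
    0x8: "WINDOW_UPDATE", 0x9: "CONTINUATION",
}

def parse_frame_header(header: bytes):
    """Decode the fixed 9-octet header that precedes every HTTP/2 frame."""
    if len(header) != 9:
        raise ValueError("HTTP/2 frame headers are exactly 9 octets")
    length = int.from_bytes(header[0:3], "big")               # 24-bit payload length
    frame_type, flags = header[3], header[4]                  # 8-bit type, 8-bit flags
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # drop the reserved bit
    return length, FRAME_TYPES.get(frame_type, "UNKNOWN"), flags, stream_id

# Example: the header of an empty SETTINGS frame on stream 0, which each
# side sends as part of the connection preface.
print(parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"))
# -> (0, 'SETTINGS', 0, 0)

A real implementation would go on to read "length" more octets of payload and interpret them according to the frame type; this sketch only shows how little framing overhead each multiplexed stream carries.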

Wednesday 3 June 2015

Brillo operating system by Google



Google introduced Brillo and Weave, its plan to provide software for the Internet of things.

Google announced its planned software for the Internet of things, and it's a pretty direct shot at all the major players trying to horn in on the space, one that takes advantage of Google's dominance in today's mobile operating system arena.
Sundar Pichai, Google's senior vice president of Chrome and Apps, said the company developed Brillo, a stripped-down version of Android that will run on battery-powered connected devices, and Weave, a communications standard that will let developers build programs that allow these connected devices to communicate.
Brillo will support Wi-Fi and Bluetooth, and because it was developed with some input from Nest (although it is not part of the Nest business), Brillo's developers at Google may also support alternative wireless radio protocols such as Thread. Pichai said the software was named after the scouring pad, because Brillo is a scrubbed-down version of the Android operating system for devices with smaller computing and memory footprints.
This is not a new approach to the Internet of things. What Google is doing is building an operating system that device manufacturers can put on their devices to ease the process of getting a device online and to manage connectivity and many of the lower-level hardware functions that manufacturers don't want to deal with. The other part of Google's Internet of things strategy is the inclusion of a communications standard called Weave, which will define certain devices and what they can do. So, for example, a camera can be turned on or off. Pichai didn't go into a lot of detail about Weave. He did say that Weave is cross-platform and that it exposes developer application programming interfaces, which is a plus for people trying to link their cloud-based services to devices communicating with Weave.
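To make the camera example concrete, here is a purely hypothetical sketch in Python of what a Weave-style device schema and command check might look like. Google has not published the actual schema format, so every field name, structure, and function below is invented for illustration only.

# Hypothetical Weave-style schema: the device declares what it is,
# which commands it accepts, and what state it reports.
camera_schema = {
    "deviceKind": "camera",
    "traits": {
        "power": {
            "commands": {"turnOn": {}, "turnOff": {}},
            "state": {"isOn": {"type": "boolean"}},
        },
    },
}

def dispatch(schema, trait, command):
    """Validate a requested command against the device's declared schema."""
    declared = schema["traits"].get(trait, {}).get("commands", {})
    if command not in declared:
        raise ValueError(f"{command!r} is not declared by trait {trait!r}")
    print(f"dispatching {trait}.{command} to the device")

dispatch(camera_schema, "power", "turnOn")   # accepted by the schema
try:
    dispatch(camera_schema, "power", "zoomIn")  # not declared, so rejected
except ValueError as err:
    print(err)

The point of a shared schema like this is that a controller (a phone app, a cloud service) can discover and validate what a device can do without custom integration code for each manufacturer.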
Weave is not a separate protocol, but rather a lightweight schema developers can use. In function it resembles what the AllSeen Alliance is pushing with AllJoyn and what the Open Interconnect Consortium is trying to do with IoTivity. However, both of those are protocols, and it's not yet clear how all three would compare and contrast for developers.
Pichai also noted that any device running Brillo and Weave will be able to talk to other Android devices, which means that once these are fully implemented, the scenario should look similar to what Apple is trying to do with HomeKit, except that Google was careful to frame its effort at a larger scale. Pichai mentioned the smart home, but also farmers and other use cases. This would give manufacturers of connected devices a reason to use Brillo and Weave over alternatives, because there would already be an embedded base of devices that could talk to them, and it becomes much easier to build services that tie all of the myriad devices together.
Brillo will be available in the third quarter of the year, while Weave will be available in its entirety in the fourth quarter. Pichai said we can expect pieces of Weave to come out before then.

Friday 27 February 2015

Microsoft HoloLens

Microsoft HoloLens, the device for Windows Holographic, is a pair of smart glasses built around a cordless, self-contained Windows 10 computer. It uses advanced sensors, a high-definition 3D optical head-mounted display, and spatial sound to enable augmented reality applications, with a natural user interface that the user operates through gaze, voice, and hand gestures. Codenamed "Project Baraboo," HoloLens had been in development for five years before its announcement in 2015, but was conceived earlier, in the original late-2007 pitch for what would become the Kinect technology platform.
Applications showcased for Microsoft HoloLens include HoloStudio, a 3D modelling application that can produce output for 3D printers; Holobuilder, a demonstration inspired by the video game Minecraft; an implementation of the Skype telecommunications application; and OnSight, a software tool developed in collaboration with NASA's Jet Propulsion Laboratory (JPL). OnSight integrates data from the Curiosity rover into a 3D simulation of the Martian environment, which scientists around the world can visualize, interact with, and collaborate in using HoloLens devices. OnSight can be used in mission planning, with users able to program rover activities by looking at a target within the simulation and using gestures to pull up and select menu commands. JPL plans to deploy OnSight in Curiosity mission operations, using it to control rover activities by July 2015.
Among the sensor types used in HoloLens is an energy-efficient depth camera with a 120°×120° field of view. Other capabilities provided by the sensors include head-tracking, video capture, and sound capture. In addition to a high-end CPU and GPU, HoloLens features a Holographic Processing Unit (HPU), a coprocessor that integrates data from the various sensors and handles tasks such as spatial mapping, gesture recognition, and voice and speech recognition.
Microsoft expects HoloLens to be made available "in the Windows 10 timeframe" and priced for use in both the enterprise and consumer markets.