
Friday 3 June 2016

Top 10 strategic technology trends for 2015



The top 10 strategic technology trends for 2015 are:

Computing Everywhere
As mobile devices continue to proliferate, there will be an increased emphasis on serving the needs of the mobile user in diverse contexts and environments, rather than focusing on devices alone.

"Phones and wearable devices are now part of an expanded computing environment that includes such things as consumer electronics and connected screens in the workplace and public space," said David Cearley, vice president & Gartner Fellow. "Increasingly, it's the overall environment that will need to adapt to the requirements of the mobile user. This will continue to raise significant management challenges for IT organisations as they lose control of user endpoint devices. It will also require increased attention to user experience design."

The Internet of Things
The combination of data streams and services created by digitizing everything creates four basic usage models: Manage, Monetize, Operate and Extend. These four basic models can be applied to any of the four "Internets." Enterprises should not limit themselves to thinking that only the Internet of Things (IoT) (assets and machines) has the potential to leverage these four models. For example, the pay-per-use model can be applied to assets (such as industrial equipment), services (such as pay-as-you-drive insurance), people (such as movers), places (such as parking spots) and systems (such as cloud services). Enterprises from all industries can leverage these four models.

3D Printing
Worldwide shipments of 3D printers are expected to grow 98 percent in 2015, followed by a doubling of unit shipments in 2016. 3D printing will reach a tipping point over the next three years as the market for relatively low-cost 3D printing devices continues to grow rapidly and industrial use expands significantly. New industrial, biomedical and consumer applications will continue to demonstrate that 3D printing is a real, viable and cost-effective means to reduce costs through improved designs, streamlined prototyping and short-run manufacturing.

Advanced, Pervasive and Invisible Analytics
Analytics will take center stage as the volume of data generated by embedded systems increases and vast pools of structured and unstructured data inside and outside the enterprise are analyzed. "Every app now needs to be an analytic app," said Cearley. "Organisations need to manage how best to filter the huge amounts of data coming from the IoT, social media and wearable devices, and then deliver exactly the right information to the right person, at the right time. Analytics will become deeply, but invisibly embedded everywhere." Big data remains an important enabler for this trend, but the focus needs to shift to thinking about big questions and big answers first and big data second; the value is in the answers, not the data.

Context-Rich Systems
Ubiquitous embedded intelligence combined with pervasive analytics will drive the development of systems that are alert to their surroundings and able to respond appropriately. Context-aware security is an early application of this new capability, but others will emerge. By understanding the context of a user request, applications can not only adjust their security response but also adjust how information is delivered to the user, greatly simplifying an increasingly complex computing world.

Smart Machines
Deep analytics applied to an understanding of context provide the preconditions for a world of smart machines. This foundation combines with advanced algorithms that allow systems to understand their environment, learn for themselves, and act autonomously. Prototype autonomous vehicles, advanced robots, virtual personal assistants and smart advisors already exist and will evolve rapidly, ushering in a new age of machine helpers. The smart machine era will be the most disruptive in the history of IT.

Cloud Computing
The convergence of cloud and mobile computing will continue to promote the growth of centrally coordinated applications that can be delivered to any device. "Cloud is the new style of elastically scalable, self-service computing, and both internal applications and external applications will be built on this new style," said Cearley. "While network and bandwidth costs may continue to favor apps that use the intelligence and storage of the client device effectively, coordination and management will be based in the cloud."

In the near term, the focus for cloud/client will be on synchronizing content and application state across multiple devices and addressing application portability across devices. Over time, applications will evolve to support simultaneous use of multiple devices. The second-screen phenomenon today focuses on coordinating television viewing with use of a mobile device. In the future, games and enterprise applications alike will use multiple screens and exploit wearables and other devices to deliver an enhanced experience.

Software-Defined Applications and Infrastructure
Agile programming of everything from applications to basic infrastructure is essential to enable organisations to deliver the flexibility required to make the digital business work. Software-defined networking, storage, data centers and security are maturing. Cloud services are software-configurable through API calls, and applications, too, increasingly have rich APIs to access their function and content programmatically. To deal with the rapidly changing demands of digital business and to scale systems up or down rapidly, computing has to move from static to dynamic models. Rules, models and code that can dynamically assemble and configure all of the elements, from the network through the application, are needed.

Web-Scale IT
Web-scale IT is a pattern of global-class computing that delivers the capabilities of large cloud service providers within an enterprise IT setting. More organisations will begin thinking, acting and building applications and infrastructure like Web giants such as Amazon, Google and Facebook. Web-scale IT does not happen immediately, but will evolve over time as commercial hardware platforms embrace the new models and cloud-optimized and software-defined approaches reach the mainstream. The first step toward the Web-scale IT future for many organisations should be DevOps: bringing development and operations together in a coordinated way to drive rapid, continuous, incremental development of applications and services.

Risk-Based Security and Self-Protection
All roads to the digital future lead through security. However, in a digital business world, security cannot be a roadblock that stops all progress. Organisations will increasingly recognize that it is not possible to provide a 100 percent secured environment. Once organizations acknowledge that, they can begin to apply more-sophisticated risk assessment and mitigation tools. On the technical side, recognition that perimeter defense is inadequate and applications need to take a more active role in security gives rise to a new multifaceted approach. Security-aware application design, dynamic and static application security testing, and runtime application self-protection combined with active context-aware and adaptive access controls are all needed in today's dangerous digital world. This will lead to new models of building security directly into applications. Perimeters and firewalls are no longer enough; every app needs to be self-aware and self-protecting.

Wednesday 14 October 2015

How to Encrypt Data on External Drives

It's not hard to lose a USB flash drive; it's even easier to steal one. If you're the victim of such a theft, panic is understandable. There could be work documents, private pictures, your kid's birthday party video, or amazing notes for a NaNoWriMo novel—anything—on that drive. It's unlikely to be the only copy—this is the age of online backup and sync, after all. But if you're crazy enough to trust your most important, irreplaceable data to a device that's even easier to misplace or forget than your keys, at least make sure that data is secure.

What you'll need is software for encrypting the data, and that software should be portable, meaning it runs on any PC without installation, since it will likely run from the flash drive itself. Note that, for the most part, these solutions also work with any external hard drive, plus your much-harder-to-steal internal hard disk drives (HDDs) and solid-state drives (SSDs).

Encryption Software

The first choice should always be to try a free software solution. A current favorite is VeraCrypt. It's free and open-source (with builds for Windows, Mac, and Linux). It lets you create a volume/vault on your USB flash drive that only you can access, or encrypt an existing drive (as long as it isn't required by the system, like your C: drive), or, optionally, encrypt the entire system drive so anyone who tries to install programs or read/write files would need to enter a password each time. That last one is overkill; stick to the first few options.
The volumes created by VeraCrypt can be standard—they're visible but only the person with the password can get access—or hidden. With the latter, even if you're forced to give up the password, it's unlikely anyone can find your data to get access anyway.

When you go to install VeraCrypt, there's an option to Extract. Do that and extract the files to your USB drive. That makes a portable version, so you don't need to have VeraCrypt on every system that you'll plug the drive into, but it does have to be run from an administrator-level login on the PC.

The VeraCrypt site has an excellent step-by-step tutorial. Another free option is CipherShed; both are off-shoots of the late, great TrueCrypt. BitLocker, which comes with select versions of Windows (the non-"Home" versions), can also be used to secure USB or external drives. If you prefer to pay, check out the $12.99 EncryptStick, which is available for Mac and Windows.
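For a sense of what this class of tools is doing under the hood, here is a minimal Python sketch of password-based file encryption using the third-party cryptography package. It is not VeraCrypt's (or any of the above tools') actual on-disk format; the file names and passphrase are placeholders, and the point is simply the pattern: stretch a passphrase into a key, then encrypt the file with that key before it ever touches the flash drive.

```python
# Minimal password-based file encryption sketch (illustrative only; not
# VeraCrypt's format). Requires: pip install cryptography
import os
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: str, salt: bytes) -> bytes:
    """Stretch a passphrase into a 32-byte key with PBKDF2-HMAC-SHA256."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def encrypt_file(src: str, dst: str, password: str) -> None:
    salt = os.urandom(16)  # random salt, stored alongside the ciphertext
    with open(src, "rb") as f:
        token = Fernet(derive_key(password, salt)).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(salt + token)

def decrypt_file(src: str, password: str) -> bytes:
    with open(src, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return Fernet(derive_key(password, salt)).decrypt(token)

# Hypothetical usage: encrypt a document before copying it to the drive.
# encrypt_file("notes.docx", "E:/notes.docx.enc", "a long passphrase")
```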

Secure Flash Drives

There are millions of USB flash drives around—I have three of various capacities littering my desk at the moment. So using third-party software to secure their contents makes great sense. But if you want security from the start, there are plenty of drives that come with security built right into the hardware.

A few select flash drives have a number pad right on the drive itself; you enter a PIN code before you can access the contents. They include the Aegis Secure Key 3.0, a $65 flash drive at 4GB with FIPS 140-2 Level 3 encryption (it also comes in 8, 16, and 32GB versions).

If you think reaching for the number pad is an issue, there are also a few biometric USB flash drives. IronKey, by Imation, is a pretty well-known name in secure drives; its F200 has a built-in finger-swipe reader and throws in multi-factor authentication for your files. The price, of course, is much higher, with the base 8GB model starting at $189 direct and shooting up to $649 for 64GB! It gets good marks for security, but most reviews also say its performance is lacking.

But you don't need anything fancy built into the hardware of your USB flash drive to be secure. Several models come with encryption software. It's held in a partition of the drive itself and looks to Windows like a CD, so it can auto-play when inserted, giving you instant access. Some options include the Kanguru Defender 2000 (4GB for $69), IronKey F150 (8GB for $139), Kingston DataTraveler Vault Privacy 3.0 (4GB for $35), and several more. All of those listed are base models; you can always get more capacity by paying more. For savings, be sure to compare prices on Google or Amazon.

Friday 9 October 2015

IPFS (InterPlanetary File System): Why We Must Distribute The Web

IPFS isn’t exactly a well-known technology yet, even among many in the Valley, but it’s quickly spreading by word of mouth among folks in the open-source community. Many are excited by its potential to greatly improve file transfer and streaming speeds across the Internet.

From my personal perspective, however, it’s actually much more important than that. IPFS eliminates the need for websites to have a central origin server, making it perhaps our best chance to entirely re-architect the Internet — before its own internal contradictions unravel it from within.

How, and why? The answer requires a bit of background.

Why We Have A Slow, Fragile And Forgetful Web

IPFS is a new peer-to-peer hypermedia protocol that aims to supplement, or possibly even replace, the Hypertext Transfer Protocol that rules the web now. Here’s the problem with HTTP: When you go to a website today, your browser has to be directly connected to the computers that are serving that website, even if their servers are far away and the transfer process eats up a lot of bandwidth.

Data providers get charged because networks have peering agreements with one another, and each network hop costs the data provider money and wastes bandwidth. Worse, HTTP downloads a file from a single computer at a time, instead of getting pieces from multiple computers simultaneously.

Consequently, we have what we’re stuck with now: a slow, expensive Internet, made even more costly by predatory last-mile carriers (in the U.S. at least), and the accelerating growth of connection requests from mobile devices. It’s not just slow and expensive, it’s unreliable. If one link in an HTTP transfer cuts out for whatever reason, the whole transfer breaks. (Whenever a web page or media file is slow to load, a problem with a link in the HTTP chain is among the likeliest culprits.)

How it works
IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm exchanging objects within one Git repository. In other words, IPFS provides a high-throughput, content-addressed block storage model with content-addressed hyperlinks. This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web. IPFS combines a distributed hash table, an incentivized block exchange, and a self-certifying namespace. IPFS has no single point of failure, and nodes do not need to trust each other.
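To make content addressing concrete, here is a toy Python sketch of a block store in which data is stored and retrieved by the hash of its contents. It is only an illustration of the idea; the real IPFS uses multihashes, chunking, and a peer-to-peer network rather than an in-memory dictionary.

```python
# Toy content-addressed block store: blocks are keyed by the SHA-256 hash of
# their contents, so any node holding a block can serve it, and the requester
# can verify it without trusting that node. (Illustrative only; not the real
# IPFS multihash or wire format.)
import hashlib

class BlockStore:
    def __init__(self):
        self._blocks = {}  # content address (hex digest) -> raw bytes

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self._blocks[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blocks[cid]
        # The address itself proves integrity, which is why untrusted
        # peers can safely serve the block.
        if hashlib.sha256(data).hexdigest() != cid:
            raise ValueError("corrupt block")
        return data

store = BlockStore()
cid = store.put(b"<html>my permanent page</html>")
print(cid)              # the same content always yields the same address
print(store.get(cid))
```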

Remaking The Internet With IPFS
The InterPlanetary File System — a tribute to J.C.R. Licklider’s vision for an “intergalactic” Internet — is the brainchild of Juan Benet, who moved to the U.S. from Mexico as a teen, earned a computer science degree at Stanford, started a company acquired by Yahoo! in 2013 and, last year at Y Combinator, founded Protocol Labs, which now drives the IPFS project and its modest aim of replacing protocols that have seemed like facts of life for the last 20 years.

As a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files, IPFS seeks to improve on HTTP in several ways. Two, Juan told me in a recent conversation, are key:

“We use content-addressing so content can be decoupled from origin servers, and instead, can be stored permanently. This means content can be stored and served very close to the user, perhaps even from a computer in the same room. Content-addressing allows us to verify the data too, because other hosts may be untrusted. And once the user’s device has the content, it can be cached indefinitely.”

IPFS also addresses security problems that plague our HTTP-based Internet: content-addressing and content-signing protect IPFS-based sites, making DDoS attacks impossible. And to help mitigate the damage of discontinued websites, IPFS makes it easy to archive and permanently store important public-record content.

IPFS’s final core improvement is decentralized distribution, which makes it possible to access Internet content despite sporadic Internet service or even while offline: “We make websites and web apps have no central origin server,” Juan explained. “They can be distributed just like the Bitcoin network is distributed.” This is actually something that HTTP simply cannot do, and would especially be a boon to networks without top-notch connectivity (i.e., the whole developing world), and for access outside of metropolitan areas.

Released in Alpha last February, IPFS has already started to see a lot of experimentation among early adopters. On September 8, for instance, Neocities became the first major site to implement IPFS, following a call from the Internet Archive for a distributed web. We currently suffer a constant loss of websites as their owners abandon them over the years — a growing crisis to our collective Internet memory — and this is a small but important step toward a more permanent web.

But will websites owned by large corporations follow Neocities’ lead, adopting such an as-yet-untested protocol — especially when the mere mention of “peer to peer” often terrifies them? That takes me to my final point.

Why IPFS Matters For The Future Of Internet Business

As I explain in my upcoming book, we are fast approaching a point where the cost of delivering content will outstrip the benefits — and profits. The major Internet companies are already struggling to stay ahead of our content demands, with armies of engineers at companies like Akamai, Google and Amazon devoted to this one problem.

And they haven’t even seen the worst of it: Thanks to rapid adoption of low-cost smartphones, whole continents of consumers will go online in the coming decade. The Internet of Things promises to only compound this challenge, as billions of devices add their own demands on our rapidly dwindling connectivity.

We are already in desperate need for a hedge against what I call micro-singularities, in which a viral event can suddenly transfix billions of Internet users, threatening to choke the entire system in the process. (A potentially life-threatening outage, when the micro-singularity involves a natural disaster or other emergency.)

Netflix recently started researching large-scale peer-to-peer technology for streaming, an early, hopeful sign that companies of its size and reach are looking for smarter content distribution methods. Netflix, YouTube, all the bandwidth-heavy services we cherish now would thrive on an Internet remade by IPFS, dramatically reducing the cost and time to serve content.

Beyond improved service, IPFS would help the Internet grow into the system we’ve always aspired it to be at our most idealistic, but cannot become with our current protocols: Truly capable of connecting everyone around the world (even offline) to a permanent but constantly evolving expression of who we are, and aspire to be.

Tuesday 29 September 2015

Google's new phones launched: the Nexus 5X and Nexus 6P

For the first time since the Nexus line launched in 2010, Google has unveiled two Nexus smartphones at the same time, one featuring a 5.2-inch display and the other sporting a 5.7-inch screen.
Here is the link to the Google press event of 29/09/2015.

At an event in San Francisco, Google announced that it has partnered with LG and Huawei to make the new Nexus 5X and Nexus 6P smartphones, respectively. The former will cost $379, whereas the latter will be available for $499; both will be up for preorder starting today in the US and next week in other markets. There was no word on the India launch of the two smartphones.

The all-new Nexus 6P has a 5.7-inch AMOLED display with QHD (1440x2560p) resolution and comes with the integrated Nexus Imprint fingerprint sensor on the back. The smartphone is powered by a Snapdragon 810 processor with 3GB of RAM and comes in three storage variants: 32, 64 and 128GB; there is no support for storage expansion.

The smartphone has a 12.3MP rear camera (with 1.55-micron pixels that capture more light, thus delivering better low-light photos) that supports 4K recording and can capture videos at 240 frames per second; it is backed by a dual-LED flash. On the front is an 8MP camera, the highest-resolution selfie camera on a Nexus smartphone yet. The smartphone features an all-metal, 7.33mm-thick body in white, grey and aluminium finishes, packs a 3,450mAh battery, and comes with front-facing stereo speakers.

Nexus 5X has a 5.2-inch screen with relatively lower Full HD (1080x1920p) resolution and runs on the 64-bit Snapdragon 808 processor with 2GB RAM. The storage capacities of the smartphone are 16GB and 32GB, with no microSD card support.
The Nexus 5X's 12.3MP camera also has 1.55-micron pixels and is backed by laser autofocus for faster focusing; it can shoot 4K video as well as 120fps slow-motion video. Google said the smartphone will come with a Smart Burst feature that captures shots at 30fps and can be used to create GIF images. Available in black, white and blue colours, the smartphone has a 2,700mAh battery.

Both smartphones sport a reversible USB Type-C port and support fast charging; the Nexus 6P charges twice as fast as the iPhone 6S Plus, Google said. Android Marshmallow will be preloaded on both Nexus models.

Talking about the fingerprint sensor, Google said it will be built right into Android; named Nexus Imprint, it can recognize fingerprints in less than 600 milliseconds. This means it can not only unlock the handset but also work with other apps; it also authenticates app downloads in the Play Store and payments in Android Pay.
The new Nexus 5X and Nexus 6P smartphones will have an integrated Android Sensor Hub that Google said will track "sensor fusion, activity recognition, gesture recognition, movements and low power times." One use case Google mentioned: the Android Sensor Hub will detect when you pick up the handset and turn on the ambient display automatically.

On the software side, Google said it has reduced the number of preloaded apps, and has made uninstalling apps easier as well.
Nexus 5X and Nexus 6P are the first smartphones to get Android's new Google Now on Tap feature, which delivers information to users based on what is currently on their screen.

About Android 6.0 (Marshmallow), Google said it will support voice interactions even when the screen is turned off and improves the battery life. On Nexus 5 and Nexus 6, the battery life increases by as much as 30% with Android Marshmallow.
Google said it will release the Android 6.0 (Marshmallow) over-the-air update for older smartphones like Nexus 5 and Nexus 6 next week. Other manufacturers will announce their plans for Marshmallow update separately.

Google Bringing Free Wi-Fi to Train Stations in India

Google today announced that it will provide high-speed public Wi-Fi in 400 train stations across India.

The news comes as Prime Minister Narendra Modi visits Google's Mountain View headquarters to champion his Digital India initiative, Google CEO Sundar Pichai wrote in a blog post.

In collaboration with Indian Railways—operator of one of the world's largest railway networks—and RailTel—provider of Internet service RailWire via its fiber network—Google's Access and Energy team aims to bring the first stations online "in the coming months" and 100 by the end of 2016, according to Pichai.

"Even with just the first 100 stations online, this project will make Wi-Fi available for the more than 10 million people who pass through every day," Pichai wrote. "This will rank it as the largest public Wi-Fi project in India, and among the largest in the world, by number of potential users."

Pichai promised "many times faster" access than existing connections in India, allowing travelers to stream a high-definition video, research their destination, or download a book or game while they wait for the next train.

"Best of all, the service will be free to start, with the long-term goal of making it self-sustainable to allow for expansion to more stations and other places in the future," Pichai said.

Why India? Even though "there are now more Internet users in India than in every country in the world aside from China...there are still nearly one billion people in India who aren't online," Pichai said.

India is one of the countries where Google is rolling out Android One, which provides low-cost Android phones to people in developing countries. "To help address the challenges of limited bandwidth, we recently launched a feature that makes mobile webpages load faster and with less data, and we've made YouTube available offline with offline Maps coming soon," Pichai said today.

Non-English-speaking Indians, meanwhile, can tap into the Indian Language Internet Alliance, which offers Hindi Voice Search, an improved Hindi keyboard, and support for seven local languages.

"Just like I did years ago, thousands of young Indians walk through Chennai Central every day, eager to learn, to explore and to seek opportunity," Pichai wrote. "It's my hope that this Wi-Fi project will make all these things a little easier."

The news comes as Facebook chief Mark Zuckerberg was at the United Nations last week calling for universal Internet access by 2020. Facebook is also offering low-cost devices to developing countries via Internet.org (now known as Free Basics). But the effort drew net neutrality complaints in India for offering free access to only a select few apps. Facebook later opened up the program to anyone who could develop low-bandwidth apps.

SOURCE:
http://googleblog.blogspot.in/2015/09/bringing-the-internet-to-more-indians.html
http://in.pcmag.com/

Sunday 27 September 2015

What's the Difference between SSD and HDD?


Until recently, PC buyers had very little choice about what kind of file storage they got with their laptop, ultrabook, or desktop. If you bought an ultrabook or ultraportable, you likely had a solid-state drive (SSD) as the primary drive (C: on Windows, Macintosh HD on a Mac). Every other desktop or laptop form factor had a hard disk drive (HDD). Now, you can configure your system with either an HDD, an SSD, or in some cases both. But how do you choose? We explain the differences between SSDs and HDDs, and walk you through the advantages and disadvantages of both to help you come to your decision.

HDD and SSD Explained
The traditional spinning hard drive (HDD) is the basic nonvolatile storage on a computer. That is, it doesn't "go away" like the data on the system memory when you turn the system off. Hard drives are essentially metal platters with a magnetic coating. That coating stores your data, whether that data consists of weather reports from the last century, a high-definition copy of the Star Wars trilogy, or your digital music collection. A read/write head on an arm accesses the data while the platters are spinning in a hard drive enclosure.

An SSD does much the same job functionally (e.g., saving your data while the system is off, booting your system, etc.) as an HDD, but instead of a magnetic coating on top of platters, the data is stored on interconnected flash memory chips that retain the data even when there's no power present. The chips can either be permanently installed on the system's motherboard (like on some small laptops and ultrabooks), on a PCI/PCIe card (in some high-end workstations), or in a box that's sized, shaped, and wired to slot in for a laptop or desktop's hard drive (common on everything else). These flash memory chips differ from the flash memory in USB thumb drives in the type and speed of the memory. That's the subject of a totally separate technical treatise, but suffice it to say that the flash memory in SSDs is faster and more reliable than the flash memory in USB thumb drives. SSDs are consequently more expensive than USB thumb drives for the same capacities.

A History of HDDs and SSDs
Hard-drive technology is relatively ancient (in terms of computer history, anyway). There are well-known pictures of the IBM 350 RAMAC hard drive from 1956, which used fifty 24-inch-wide platters to hold a whopping 3.75MB of storage space. This, of course, is the size of an average 128Kbps MP3 file, in the physical space that could hold two commercial refrigerators. The IBM 350 was only utilized by government and industrial users, and was obsolete by 1969. Ain't progress wonderful? The PC hard drive form factor standardized in the early 1980s, with the desktop-class 5.25-inch form factor, and with the 3.5-inch desktop-class and 2.5-inch notebook-class drives coming soon thereafter. The internal cable interface has changed over the years from serial to IDE to SCSI to SATA, but each essentially does the same thing: connect the hard drive to the PC's motherboard so your data can be processed. Today's 2.5- and 3.5-inch drives use SATA interfaces almost exclusively (at least on most PCs and Macs). Capacities have grown from multiple megabytes to multiple terabytes, a millions-fold increase. Current 3.5-inch HDDs max out at 10TB, with 2.5-inch drives at 3TB max.

The SSD has a much more recent history. There was always an infatuation with non-moving storage from the beginning of personal computing, with technologies like bubble memory flashing (pun intended) and dying in the 1970s and '80s. Current flash memory is the logical extension of the same idea. The flash memory chips store your data and don't require constant power to retain that data. The first primary drives that we know as SSDs appeared during the rise of netbooks in the late 2000s. In 2007, the OLPC XO-1 used a 1GB SSD, and the Asus Eee PC 700 series used a 2GB SSD as primary storage. The SSD chips on low-end Eee PC units and the XO-1 were permanently soldered to the motherboard. As netbooks, ultrabooks, and other ultraportable laptop PCs became more capable, SSD capacities increased and eventually standardized on the 2.5-inch notebook form factor. This way, you could pop a 2.5-inch hard drive out of your laptop or desktop and replace it easily with an SSD. Other form factors emerged, like the mSATA Mini PCIe SSD card, the M.2 SSD, and the DIMM-like SSDs in the Apple MacBook Air, but today many SSDs are still built in the 2.5-inch form factor. The 2.5-inch SSD capacity currently tops out at 4TB, but will undoubtedly grow as time goes by.

Advantages and Disadvantages
Both SSDs and HDDs do the same job: They boot your system, store your applications, and store your personal files. But each type of storage has its own unique feature set. The question is, what's the difference, and why would a user get one over the other? We break it down:

Price: To put it bluntly, SSDs are more expensive than HDDs in terms of rupees per GB. For the same capacity and form factor (a 1TB internal 2.5-inch drive), you'll pay about Rs 4,000 for an HDD, but as of this writing, an SSD shoots up to over Rs 60,000. Since HDDs are the older, more established technology, they will remain less expensive for the near future. Those extra tens of thousands of rupees may push your system price over budget.

Maximum and Common Capacity: As seen above, SSD units top out at 4TB, but those are still very rare and expensive. You're more likely to find 500GB to 1TB units as primary drives in systems. While 500GB is considered a "base" hard drive in 2015, pricing concerns can push that down to 128GB for lower-priced SSD-based systems. Multimedia users will require even more, with 1TB to 4TB drives common in high-end systems. Basically, the more storage capacity, the more stuff (photos, music, videos, etc.) you can hold on your PC. While the (Internet) cloud may be a good place to share these files among your phone, tablet, and PC, local storage is less expensive, and you only have to buy it once.

Speed: This is where SSDs shine. An SSD-equipped PC will boot in seconds, certainly under a minute. A hard drive requires time to speed up to operating specs, and will continue to be slower than an SSD during normal use. A PC or Mac with an SSD boots faster, launches apps faster, and has faster overall performance. Witness the higher PCMark benchmark scores on laptops and desktops with SSDs, plus the much higher scores and transfer times for external SSDs versus HDDs. Whether it's for fun, school, or business, the extra speed may be the difference between finishing on time or failing.
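If you want a rough feel for the gap on your own machines, a simple sequential-read timing like the hypothetical Python sketch below will do; the file paths are placeholders, and you'd want a test file larger than your RAM (or a fresh reboot between runs) so the operating system's file cache doesn't flatter the result.

```python
# Rough sequential-read throughput check (illustrative; file paths are
# hypothetical, and OS caching can inflate results on repeat runs).
import time

def sequential_read_mb_per_s(path: str, chunk: int = 4 * 1024 * 1024) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

# print(sequential_read_mb_per_s("D:/bigfile.bin"))  # e.g., an HDD volume
# print(sequential_read_mb_per_s("C:/bigfile.bin"))  # e.g., an SSD volume
```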

Fragmentation: Because of their rotary recording surfaces, hard drives work best with larger files that are laid down in contiguous blocks. That way, the drive head can start and end its read in one continuous motion. When hard drives start to fill up, large files can become scattered around the disk platter, which is otherwise known as fragmentation. While read/write algorithms have improved to the point that the effect is minimized, the fact of the matter is that HDDs can become fragmented, while SSDs don't care where the data is stored on their chips, since there's no physical read head. Thus, SSDs are inherently faster.

Durability: An SSD has no moving parts, so it is more likely to keep your data safe in the event that you drop your laptop bag or your system is shaken about by an earthquake while it's operating. Most hard drives park their read/write heads when the system is off, but they are flying over the drive platter at hundreds of miles an hour when they are in operation. Besides, even parking brakes have limits. If you're rough on your equipment, an SSD is recommended.

Availability: Hard drives are simply more plentiful. Look at the product lists from Western Digital, Toshiba, Seagate, Samsung, and Hitachi, and you'll see many more HDD models than SSDs. For PCs and Macs, internal HDDs won't be going away completely, at least for the next couple of years. You'll also see many more HDD choices than SSDs from different manufacturers for the same capacities. SSD model lines are growing in number, but HDDs are still in the majority for storage devices in PCs.

Form Factors: Because HDDs rely on spinning platters, there is a limit to how small they can be manufactured. There was an initiative to make smaller 1.8-inch spinning hard drives, but that's stalled at about 320GB, since the phablet and smartphone manufacturers have settled on flash memory for their primary storage. SSDs have no such limitation, so they can continue to shrink as time goes on. SSDs are available in 2.5-inch laptop drive-sized boxes, but that's only for convenience. As laptops become slimmer and tablets take over as primary Web-surfing platforms, you'll start to see the adoption of SSDs skyrocket.

Noise: Even the quietest HDD will emit a bit of noise when it is in use from the drive spinning or the read arm moving back and forth, particularly if it's in a system that's been banged about or in an all-metal system where it's been shoddily installed. Faster hard drives will make more noise than slower ones. SSDs make virtually no noise at all, since they're non-mechanical.

Overall: HDDs win on price, capacity, and availability. SSDs work best if speed, ruggedness, form factor, noise, or fragmentation (technically part of speed) are important factors to you. If it weren't for the price and capacity issues, SSDs would be the winner hands down.

As far as longevity goes, while it is true that SSDs wear out over time (each cell in a flash memory bank has a limited number of times it can be written and erased), thanks to TRIM command technology built into SSDs that dynamically optimizes these read/write cycles, you're more likely to discard the system for obsolescence before you start running into read/write errors. The possible exceptions are high-end multimedia users like video editors who read and write data constantly, but those users will need the larger capacities of hard drives anyway. Hard drives will eventually wear out from constant use as well, since they use physical recording methods. Longevity is a wash when it's separated from travel and ruggedness concerns.

Sunday 20 September 2015

What is HDR (High Dynamic Range)?

High dynamic range (HDR) video is one of the newest HDTV feature bullet points. It could push video content past the (now nonexistent) limitations to which broadcast and other media standards have adhered for decades. But adoption could be slow over the next few years because it's a complicated and somewhat esoteric feature.

Standard Dynamic Range
HDTV contrast is the difference between how dark and how bright a display can get. Dynamic range describes the extremes in that difference, and how much detail can be shown in between. Essentially, dynamic range is display contrast, and HDR represents broadening that contrast. However, just expanding the range between bright and dark is insufficient to improve a picture's detail. Whether a panel can reach 100 cd/m2 (relatively dim) or 500 cd/m2 (incredibly bright), and whether its black level is 0.1 cd/m2 (washed out, nearly gray) or 0.005 cd/m2 (incredibly dark), it can ultimately only show so much information based on the signal it's receiving.
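As a quick worked example using the figures above (contrast ratio is simply peak brightness divided by black level, both in cd/m2):

```python
# Contrast ratio = peak brightness / black level, using the figures above.
for peak, black in [(100, 0.1), (500, 0.1), (500, 0.005)]:
    print(f"{peak} cd/m2 peak / {black} cd/m2 black -> {peak / black:,.0f}:1")
# 100 cd/m2 peak / 0.1 cd/m2 black -> 1,000:1
# 500 cd/m2 peak / 0.1 cd/m2 black -> 5,000:1
# 500 cd/m2 peak / 0.005 cd/m2 black -> 100,000:1
```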

Current popular video formats, including broadcast television and Blu-ray discs, are limited by standards built around the physical boundaries presented by older technologies. Black is set to only so black, because as Christopher Guest eloquently wrote, it could get none more black. Similarly, white could only get so bright within the limitations of display technology. Now, with organic LED (OLED) and local dimming LED backlighting systems on newer LCD panels, that range is increasing. They can reach further extremes, but video formats can't take advantage of it. Only so much information is presented in the signal, and an HDTV capable of reaching beyond those limits still has to stretch and work with the information present.

What Is HDR?
That's where HDR video comes in. It removes the limitations presented by older video signals and provides information about brightness and color across a much wider range. HDR-capable displays can read that information and show an image built from a wider gamut of color and brightness. Besides the wider range, HDR video simply contains more data to describe more steps in between the extremes. This means that very bright objects and very dark objects on the same screen can be shown very bright and very dark if the display supports it, with all of the necessary steps in between described in the signal and not synthesized by the image processor.

To put it more simply, HDR content on HDR-compatible HDTVs can get brighter and darker at the same time, and show more shades of gray in between. Similarly, they can produce deeper and more vivid reds, greens, and blues, and show more shades in between. Deep shadows aren't simply black voids; more details can be seen in the darkness, while the picture stays very dark. Bright shots aren't simply sunny, vivid pictures; fine details in the brightest surfaces remain clear. Vivid objects aren't simply saturated; more shades of colors can be seen.
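One way to picture "more steps in between": each extra bit of precision per color channel doubles the number of shades a signal can describe. The snippet below is plain arithmetic for illustration, not a description of any particular HDR specification.

```python
# More bits per channel means more shades between darkest and brightest.
for bits in (8, 10, 12):
    print(f"{bits}-bit video: {2 ** bits:,} shades per color channel")
# 8-bit video: 256 shades per color channel
# 10-bit video: 1,024 shades per color channel
# 12-bit video: 4,096 shades per color channel
```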

This requires much more data, and like ultra high-definition video, current optical media can't handle it. Blu-ray discs cannot hold HDR information. That will change over the next few years as the UHD Alliance pushes the Ultra HD Blu-ray standard. It's a disc type that can hold more data and is built to contain 4K video, HDR video, and even object-based surround sound like Dolby Atmos. It could solve all of the distribution problems of 4K and HDR without requiring a very fast Internet connection. Online streaming will still be a valid way to offer 4K and HDR video, but Ultra HD Blu-ray provides a physical and broadly accessible way to get it.

What You'll Need
Don't expect to see these discs working in your current Blu-ray player, though. While they're called Blu-ray discs, they use different technology and different encoding standards to stuff all of that information onto the medium. You'll need an Ultra HD Blu-ray player to use these new discs. We'll see if some players will be able to read this media with firmware upgrades in the future, but for now it seems that new players will be necessary.

You'll need an HDR-compatible HDTV, as well. HDR is not 4K. A 4K screen might support HDR, but that doesn't apply to all sets. If your HDTV doesn't support HDR, it won't take advantage of the additional information in the signal, and its panel isn't calibrated to handle that information even if it were properly read. So, if you haven't picked up a 4K television yet, you might want to wait for an HDR-compatible one that fits your needs. If you have, don't fret; HDR content is even newer and rarer than 4K video, and we won't see it become widely available for a while. The HDR content situation today is similar to the 4K video situation three years ago: what's out there is mostly there to show off the technology rather than present a really compelling, broad reason for consumers to adopt it just yet.

Where Is it Now?
Currently, Amazon, 20th Century Fox, Universal, BBC, LG, Broadcom, the UHD Alliance, the CEA, and other organizations are working on standards and distribution methods for HDR content. LG has been promoting its OLED televisions as being HDR-capable, as has Samsung with its own high-end LED televisions. HDR video remains very limited, with only some movies available through very specific distribution methods, like hard drives with HDR films preloaded on them that ship with HDR-compatible HDTVs.

We'll be keeping an eye out for more HDR news and products as they come out. Expect new HDR displays and content to appear in January at CES 2016.