Monday, 7 March 2016
Software
Coding Without a Net at Yahoo, Part Two
Photo: Press Association/AP Photo
In December, I reported on a frank discussion I’d had with Yahoo’s chief architect, Amotz Maimon, and the company’s senior vice president of science and technology, Jay Rossiter, on their decision to eliminate the quality assurance team. The idea, they said, was to force engineers to develop tools to better check their own code, and to think about their jobs differently, as part of a larger effort to shift the company’s software development approach from batch releases to continuous delivery. Maimon told me the approach was “100 percent working,” and Rossiter said it had reduced, rather than increased, the number of problems that went live.
That post triggered a lengthy and sometimes heated discussion—in the comments on the post itself, as well as on Slashdot and on Hacker News—about the role of quality assurance in software development today. The commenters had much to say about their own experiences, about quality assurance pro and con, and about Yahoo’s products. A few examples:
“They didn't STOP testing, they just automated it. Our company did the same years ago. Literally a one button, no monitoring process to: build across multiple architectures, test for several hours across each of them, package up the releases and (with a second button press) release to the web site. This is not hard, it just requires commitment to keep it maintained and to acknowledge it does not come for free (you can't just fire your QA time and expect the engineers to develop it in their free time).”
“The point is not to ‘remove QA’, but quite on the contrary, to remove the BARRIER between engineering and QA, to shorten the feedback and accountability loop. More, better QA, with less overhead.”
“This is the most stupid thing ever.... Of course there will be fewer bugs found if there are no testers!!! Doesn't mean to say that they aren't in the software!!!!”
“I have been using Yahoo and wondered how come I started facing issues in using the emails. Now I got the answer.”
The commenters also had some key questions. I went back to Yahoo’s Maimon for answers.
Q: Given that developers are now doing their own testing, were their project loads changed to allow time for this?
Maimon: We asked developers to invest in test automation, not manual testing. There was an initial effort we executed without any major schedule changes. This stemmed from the work of our (former) QA team, who developed the test automation process. When compared with our manual testing efforts, our automated testing process increased overall speed and quality of results, which enabled us to avoid any significant impact. By eliminating the slow manual testing from the pipeline, we were able to increase our overall speed and productivity. Moving to continuous delivery also lowered the "unit of change", or size of changes pushed to production. We pushed multiple changes a day, but each change was smaller and simpler, which reduced complexity and risk in the release process, while it improved quality.
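To make the idea concrete, a continuous-delivery gate can be as simple as a script that runs the automated test suite on every change and promotes the change only when it passes. The sketch below is illustrative rather than Yahoo's actual pipeline; the deploy.sh script is a hypothetical stand-in for whatever pushes a build to production.
```python
# A minimal sketch of a continuous-delivery gate (not Yahoo's pipeline).
# Every small change runs the automated tests; only passing changes ship.
# "deploy.sh" is a hypothetical deployment script.
import subprocess
import sys

def run_automated_tests() -> bool:
    # pytest exits with a nonzero code if any test fails
    return subprocess.run(["pytest", "-q"]).returncode == 0

def deploy() -> None:
    subprocess.run(["./deploy.sh"], check=True)

if __name__ == "__main__":
    if run_automated_tests():
        deploy()        # the small change goes straight to production
    else:
        sys.exit("Tests failed; the change is not promoted.")
```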
Q: Did Yahoo need to add developers?
Maimon: A certain portion of the QA people converted to developers, but we did not need to grow the organization further as a result of the change. Since productivity went up, we were able to get more done with the same amount of people.
Q: Do you have any data/numbers to back up claims that the change made for fewer errors and a faster development cycle?
Maimon: We measure all of these, but cannot release the actual numbers. The number of software updates pushed to production systems went up by four to five times; the overall number of incidents went down, as did the number of change-related incidents—that is, something that happens when a software change that’s pushed to production causes a failure. Overall, the relative number of software change-related failures went down by about 80 percent.
Q: Finally, do you have any evidence that the change made the development job “more fun?”
Maimon: Developers like speed, fast exposure of new development, and fast real-user feedback. As such, they liked the change once the initial effort was done.
Nervana Systems Puts Deep Learning AI in the Cloud
By Jeremy Hsu
Deep learning is Silicon Valley’s latest and greatest attempt at training artificial intelligence to understand the world by sifting through huge amounts of data. A startup called Nervana Systems aims to make AI based on deep learning neural networks even more widely available by turning it into a cloud service for any industry that has Big Data problems to solve.
“Our goal is to really be the world’s platform on which you do artificial intelligence,” says Naveen Rao, cofounder and CEO of Nervana.
Nervana has a different philosophy than many other deep learning startups who are “trying to compete head to head with the Googles of the world,” Rao says. He thinks it will be tough for even the scrappiest startups to compete directly with Google’s huge data science team at tackling the toughest computational problems. Instead, Nervana wants to develop and sell deep-learning AI as a service for the many companies that have more mundane Big Data problems but lack the data scientists and deep learning tools to handle them.
“From the start, we’ve been focused on making deep learning fast, easy to use and scaling,” Rao says.
On 29 February, Nervana officially debuted its Nervana Cloud service. Nervana Cloud is a hosted hardware and software platform that allows any organization to develop its own deep learning solutions tailored to the specific problems of its industry, be it healthcare, agriculture, finance, energy, or something else. The cloud service also promises much speedier solutions than competing AI cloud platforms—up to 10 times faster.
Nervana’s deep learning AI has already been at work for several companies. Blue River Technology is a precision agriculture company that uses computer vision and robots to improve farming efficiency by removing unwanted plants and making decisions based on the condition of individual crop plants. By using Nervana’s deep learning service, Blue River managed to improve its robots’ ability to reliably detect individual plants.
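At its core, what such a service trains, at vastly larger scale and on real images, is the same kind of model as the tiny two-layer network below, which learns a synthetic “plant versus weed” split from made-up feature vectors. It is purely illustrative and is not Blue River’s model or Nervana’s software.
```python
# A purely illustrative two-layer neural network trained on synthetic
# "plant vs. weed" feature vectors (not Blue River's model or Nervana's code).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples, 4 made-up features, binary labels
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Network: 4 inputs -> 8 hidden units (tanh) -> 1 sigmoid output
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_out = (p - y) / len(X)            # gradient of the cross-entropy loss
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1  # gradient-descent update
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```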
In another case, a company called Paradigm used Nervana Cloud to more accurately detect underground features within 3-D images that could indicate good locations for oil drilling. The improved accuracy translated into more efficient drilling decisions that reduced the time and money wasted on locations that may not yield worthwhile oil deposits.
“Nervana Cloud enables customers to leverage their own data, find insights in their own data and use them to their advantage,” Rao explains. “Our platform allows you to build custom solutions for enterprise problems.”
Nervana’s deep learning software currently runs on NVIDIA GPU chips. But for the long run, the startup is developing its own optimized hardware. Either way, the Nervana Cloud acts as the main doorway for client companies to access such deep learning resources.
Rao described Nervana’s new architecture for deep learning as possibly involving multiple specialized chips that could work together in concert. That would eventually enable much larger versions of the brain-inspired neural networks that form the foundation of deep learning AI, he says. Such larger neural networks could enable deep learning AI to sift through more challenging Big Data problems involving high-resolution images and video.
Founded in 2014, Nervana has about 42 employees split between an original office in San Diego and a somewhat larger branch in Palo Alto, Calif. That makes the company a fairly lean operation in a deep learning field filled with tech giants such as Google and IBM. Still, the startup has already raised about US $28 million in seed funding from venture capital firms such as DFJ, DCVC, Allen & Co, AME Cloud Ventures, Playground Global, CME Group, Fuel Capital, Lux Capital, and Omidyar Network.
There’s a lot of low-hanging fruit for deep learning AI to pluck in terms of the more common Big Data problems facing many different industries, says Rao. But to thrive, Nervana still needs to figure out how to sell its deep learning cloud platform as a valued service to more companies like Blue River and Paradigm.
How To Kill A Supercomputer: Dirty Power, Cosmic Rays, and Bad Solder
Will future exascale supercomputers be able to withstand the steady onslaught of routine faults?
As a child, were you ever afraid that a monster lurking in your bedroom would leap out of the dark and get you? My job at Oak Ridge National Laboratory is to worry about a similar monster, hiding in the steel cabinets of the supercomputers and threatening to crash the largest computing machines on the planet.
The monster is something supercomputer specialists call resilience—or rather the lack of resilience. It has bitten several supercomputers in the past. A high-profile example affected what was the second fastest supercomputer in the world in 2002, a machine called ASCI Q at Los Alamos National Laboratory. When it was first installed at the New Mexico lab, this computer couldn’t run more than an hour or so without crashing.
The ASCI Q was built out of AlphaServers, machines originally designed by Digital Equipment Corp. and later sold by Hewlett-Packard Co. The problem was that an address bus on the microprocessors found in those servers was unprotected, meaning that there was no check to make sure the information carried on these within-chip signal lines did not become corrupted. And that’s exactly what was happening when these chips were struck by cosmic radiation, the constant shower of particles that bombard Earth’s atmosphere from outer space.
To prove to the manufacturer that cosmic rays were the problem, the staff at Los Alamos placed one of the servers in a beam of neutrons, causing errors to spike. By putting metal side panels on the ASCI Q servers, the scientists reduced radiation levels enough to keep the supercomputer running for 6 hours before crashing. That was an improvement, but still far short of what was desired for running supercomputer simulations.
Illustration: Shaw Nielsen
An even more dramatic example of cosmic-radiation interference happened at Virginia Tech’s Advanced Computing facility in Blacksburg. In the summer of 2003, Virginia Tech researchers built a large supercomputer out of 1,100 Apple Power Mac G5 computers. They called it Big Mac. To their dismay, they found that the failure rate was so high it was nearly impossible even to boot the whole system before it would crash.
The problem was that the Power Mac G5 did not have error-correcting code (ECC) memory, and cosmic ray–induced particles were changing so many values in memory that out of the 1,100 Mac G5 computers, one was always crashing. Unusable, Big Mac was broken apart into individual G5s, which were sold one by one online. Virginia Tech replaced it with a supercomputer called System X, which had ECC memory and ran fine.
Cosmic rays are a fact of life, and as transistors get smaller, the amount of energy it takes to spontaneously flip a bit gets smaller, too. By 2023, when exascale computers—ones capable of performing 10¹⁸ operations per second—are predicted to arrive in the United States, transistors will likely be a third the size they are today, making them that much more prone to cosmic ray–induced errors. For this and other reasons, future exascale computers will be prone to crashing much more frequently than today’s supercomputers do. For me and others in the field, that prospect is one of the greatest impediments to making exascale computing a reality.
Just how many spurious bit flips are happening inside supercomputers already? To try to find out, researchers performed a study [PDF] in 2009 and 2010 on the then most powerful supercomputer—a Cray XT5 system at Oak Ridge, in Tennessee, called Jaguar.
Jaguar had 360 terabytes of main memory, all protected by ECC. I and others at the lab set it up to log every time a bit was flipped incorrectly in main memory. When I asked my computing colleagues elsewhere to guess how often Jaguar saw such a bit spontaneously change state, the typical estimate was about a hundred times a day. In fact, Jaguar was logging ECC errors at a rate of 350 per minute.
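To put that figure in perspective, a quick bit of arithmetic shows how far off those guesses were:
```python
# 350 corrected single-bit errors per minute, versus the "hundred a day"
# my colleagues typically guessed
errors_per_minute = 350
errors_per_day = errors_per_minute * 60 * 24
print(errors_per_day)               # 504,000 corrected bit flips per day
print(errors_per_day / 100)         # roughly 5,000 times the typical guess
```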
Data source: Los Alamos National Laboratory
Failure Not Optional: Modern supercomputers are so large that failures are expected to occur regularly. In 2006, the Red Storm supercomputer at Sandia National Laboratories typically suffered a handful of system interruptions each day, for example.
In addition to the common case of a single cosmic ray flipping a single bit, in some cases a single high-energy particle cascaded through the memory chip flipping multiple bits. And in a few cases the particle had enough energy to permanently damage a memory location.
ECC can detect and correct a single-bit error in one word of memory (typically 64 bits). If two bits are flipped in a word, ECC can detect that the word is corrupted, but cannot fix it. The study found that double-bit errors occurred about once every 24 hours in Jaguar’s 360 TB of memory.
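Those rules can be made concrete with a toy version of the same scheme. The sketch below implements single-error-correct, double-error-detect (SECDED) coding on a 4-bit word rather than the 64-bit words real memory controllers protect; it illustrates the principle, not the actual DRAM hardware.
```python
# A toy SECDED ("single error correct, double error detect") code over 4 data
# bits, the same principle Jaguar's memory applies to 64-bit words.

def secded_encode(data4):
    """Encode 4 data bits as an 8-bit codeword [p0, p1, p2, d1, p4, d2, d3, d4]."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4              # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # parity over codeword positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4              # parity over codeword positions 5, 6, 7
    word7 = [p1, p2, d1, p4, d2, d3, d4]    # Hamming(7,4), positions 1..7
    p0 = 0
    for b in word7:                # overall parity bit enables double-error detection
        p0 ^= b
    return [p0] + word7

def secded_decode(code8):
    rest = list(code8[1:])         # positions 1..7 of the Hamming code
    syndrome = 0
    for pos in range(1, 8):
        if rest[pos - 1]:
            syndrome ^= pos        # XOR of the positions of all set bits
    overall = code8[0]
    for b in rest:
        overall ^= b               # 0 if the overall parity still holds
    if syndrome == 0 and overall == 0:
        status = "no error"
    elif syndrome != 0 and overall == 1:
        rest[syndrome - 1] ^= 1    # single-bit error: locate and fix it
        status = f"corrected single-bit error at position {syndrome}"
    elif syndrome == 0 and overall == 1:
        status = "corrected an error in the overall parity bit"
    else:
        status = "double-bit error detected (uncorrectable)"
    data = [rest[2], rest[4], rest[5], rest[6]]   # data bits sit at positions 3, 5, 6, 7
    return data, status

code = secded_encode([1, 0, 1, 1])
code[4] ^= 1                       # one cosmic-ray flip: silently repaired on decode
print(secded_decode(code))
code[6] ^= 1                       # a second flip in the same word: only detectable
print(secded_decode(code))
```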
The surface area of all the silicon in a supercomputer functions somewhat like a large cosmic-ray detector. And as that surface area grows, the number of cosmic-ray strikes also grows. Exascale systems are projected to have up to 100 petabytes of memory—50 times as much as today’s supercomputers—resulting in that much more real estate for a cosmic-ray particle to hit.
But resilience is not all about bit flips and cosmic rays. Even the simplest components can cause problems. The main resilience challenge for Jaguar was a voltage-regulator module. There were 18,688 of them, and whenever one failed, a board carrying two of the machine’s 37,376 hex-core processors powered off.
Two lost processors wasn’t the issue—Jaguar would automatically detect the malfunction and reconfigure the system to work without the problematic board. But that board also contained a network-communication chip, which all other such boards in the system depended on to route messages. When this board powered down, the system would continue to run a while, but it would eventually hang, requiring a reboot of the entire supercomputer to reset all the board-to-board routing tables. While today’s supercomputers do dynamic routing to avoid such failures, the growing complexity of these computing behemoths is increasing the chances that a single fault will cascade across the machine and bring down the entire system.
Photos: Oak Ridge National Laboratory
Reduce, Reuse, Recycle: When your supercomputer starts showing its age, you have to do something or else the cost of the electricity to run it won’t be worth the results you obtain. But that doesn’t mean you need to throw it out. In 2011 and 2012, Oak Ridge National Laboratory upgraded its Jaguar supercomputer, first installed in 2005, transforming it into a far more capable machine called Titan [see table above]. The effort, as shown in these photos, was extensive, but it made Titan No. 1 in the world for a time.
Supercomputer operators have had to struggle with many other quirky faults as well. To take one example: The IBM Blue Gene/L system at Lawrence Livermore National Laboratory, in California, the largest computer in the world from 2004 to 2008, would frequently crash while running a simulation or produce erroneous results. After weeks of searching, the culprit was uncovered: the solder used to make the boards carrying the processors. Radioactive lead in the solder was found to be causing bad data in the L1 cache, a chunk of very fast memory meant to hold frequently accessed data. The workaround to this resilience problem on the Blue Gene/L computers was to reprogram the system to, in essence, bypass the L1 cache. That worked, but it made the computations slower.
So the worry is not that the monster I’ve been discussing will come out of the closet. It’s already out. The people who run the largest supercomputers battle it every day. The concern, really, is that the rate of faults it represents will grow exponentially, which could prevent future supercomputers from running long enough for scientists to get their work done.
Several things are likely to drive the fault rate up. I’ve already mentioned two: the growing number of components and smaller transistor sizes. Another is the mandate to make tomorrow’s exascale supercomputers at least 15 times as energy efficient as today’s systems.
To see why that’s needed, consider the most powerful supercomputer in the United States today, a Cray XK7 machine at Oak Ridge called Titan. When running at peak speed, Titan uses 8.2 megawatts of electricity. In 2012, when it was the world’s most powerful supercomputer, it was also the third most efficient in terms of floating-point operations per second (flops) per watt. Even so, scaled up to exaflop size, such hardware would consume more than 300 MW—the output of a good-size power plant. The electric bill to run such a supercomputer would be about a third of a billion dollars per year.
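The arithmetic behind that estimate is simple. In the snippet below, the 27-petaflop peak figure for Titan and the 10-cents-per-kilowatt-hour electricity price are assumptions, not numbers from the article.
```python
# Back-of-envelope version of the scaling argument above. Titan's roughly
# 27-petaflop peak and the $0.10-per-kWh electricity price are assumptions.
titan_power_mw = 8.2
titan_peak_pflops = 27.0
exaflop_pflops = 1000.0

exascale_power_mw = titan_power_mw * exaflop_pflops / titan_peak_pflops
annual_energy_kwh = exascale_power_mw * 1000 * 24 * 365
annual_cost_usd = annual_energy_kwh * 0.10

print(f"{exascale_power_mw:.0f} MW")                      # just over 300 MW
print(f"${annual_cost_usd / 1e6:.0f} million per year")   # about a third of a billion
```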
No wonder then that the U.S. Department of Energy has announced the goal of building an exaflop computer by 2023 that consumes only 20 MW of electricity. But reducing power consumption this severely could well compromise system resilience. One reason is that the power savings will likely have to come from smaller transistors running at lower voltages to draw less power. But running right at the edge of what it takes to make a transistor switch on and off increases the probability of circuits flipping state spontaneously.
Further concern arises from another way many designers hope to reduce power consumption: by powering off every unused chip, or every circuit that’s not being used inside a chip, and then turning them on quickly when they’re needed. Studies done at the University of Michigan in 2009 found that constant power cycling reduced a chip’s typical lifetime by up to 25 percent.
Power cycling has a secondary effect on resilience because it causes voltage fluctuations throughout the system—much as a home air conditioner can cause the lights to dim when it kicks on. Too large of a voltage fluctuation can cause circuits to switch on or off spontaneously inside a computer.
Using a heterogeneous architecture, such as that of Titan, which is composed of AMD multicore CPUs and Nvidia GPUs (graphics processing units), makes error detection and recovery even harder. A GPU is very efficient because it can run hundreds of calculations simultaneously, pumping huge amounts of data through it in pipelines that are hundreds of clock cycles long. But if an error is detected in just one of the calculations, it may require waiting hundreds of cycles to drain the pipelines on the GPU before beginning recovery, and all of the calculations being performed at that time may need to be rerun.
So far I’ve discussed how hard it will be to design supercomputer hardware that is sufficiently reliable. But the software challenges are also daunting. To understand why, you need to know how today’s supercomputer simulations deal with faults. They periodically record the global state of the supercomputer, creating what’s called a checkpoint. If the computer crashes, the simulation can then be restarted from the last valid checkpoint instead of beginning some immense calculation anew.
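In code, the idea is straightforward. The toy sketch below checkpoints a trivial “simulation” to disk and restarts from the last checkpoint whenever a simulated crash occurs; real applications do the same thing with terabytes of distributed state written to a parallel file system.
```python
# A toy checkpoint/restart loop (real codes write terabytes of distributed
# state to a parallel file system; this just pickles a counter to disk).
import os
import pickle
import random

CHECKPOINT = "checkpoint.pkl"
TOTAL_STEPS = 1_000
CHECKPOINT_EVERY = 100

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)              # resume from the last valid checkpoint
    return {"step": 0, "value": 0.0}           # otherwise start from the beginning

def save_checkpoint(state):
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

def run():
    state = load_checkpoint()
    while state["step"] < TOTAL_STEPS:
        if random.random() < 0.001:            # a simulated node failure
            raise RuntimeError("simulated crash")
        state["value"] += 1.0                  # one step of "science"
        state["step"] += 1
        if state["step"] % CHECKPOINT_EVERY == 0:
            save_checkpoint(state)
    return state

while True:
    try:
        final_state = run()
        break                                  # the simulation finished
    except RuntimeError:
        pass                                   # crash: restart from the last checkpoint
print(final_state)
```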
Data source: Los Alamos National Laboratory
A Looming Crisis: As systems get larger, the time it takes to save the state of memory will exceed the time between failures, making it impossible to use the previous “checkpoint” to recover from errors.
This approach won’t work indefinitely, though, because as computers get bigger, the time needed to create a checkpoint increases. Eventually, this interval will become longer than the typical period before the next fault. A challenge for exascale computing is what to do about this grim reality.
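A standard first-order model (Young's approximation, an addition here rather than something from the studies above) makes the squeeze explicit: if writing a checkpoint takes time d and the mean time between failures is M, the fraction of machine time lost to checkpointing plus redone work is roughly d/t + t/(2M) for a checkpoint interval t, which is minimized at t = sqrt(2dM). Once d approaches M, no interval leaves room for useful work.
```python
# Young's approximation for checkpoint overhead (a first-order model; the
# checkpoint times and MTBF values below are hypothetical).
import math

def optimal_overhead(d_minutes, mtbf_minutes):
    t = math.sqrt(2 * d_minutes * mtbf_minutes)        # best checkpoint interval
    lost_fraction = d_minutes / t + t / (2 * mtbf_minutes)
    return t, lost_fraction

for d, mtbf in [(5, 24 * 60), (30, 60), (60, 60)]:
    t, lost = optimal_overhead(d, mtbf)
    print(f"checkpoint {d} min, MTBF {mtbf} min -> "
          f"interval {t:.0f} min, ~{100 * lost:.0f}% of time lost")
```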
Several groups are trying to improve the speed of writing checkpoints. To the extent they are successful, these efforts will forestall the need to do something totally different. But ultimately, applications will have to be rewritten to withstand a constant barrage of faults and keep on running.
Unfortunately, today’s programming models and languages don’t offer any mechanism for such dynamic recovery from faults. In June 2012, members of an international forum composed of vendors, academics, and researchers from the United States, Europe, and Asia met and discussed adding resilience to message-passing interface, or MPI, the programming model used in nearly all supercomputing code. Those present at that meeting voted that the next version of MPI would have no resilience capabilities added to it. So for the foreseeable future, programming models will continue to offer no methods for notification or recovery from faults.
One reason is that there is no standard that describes the types of faults that the software will be notified about and the mechanism for that notification. A standard fault model would also define the actions and services available to the software to assist in recovery. Without even a de facto fault model to go by, it was not possible for these forum members to decide how to augment MPI for greater resilience.
So the first order of business is for the supercomputer community to agree on a standard fault model. That’s more difficult than it sounds because some faults might be easy for one manufacturer to deal with and hard for another. So there are bound to be fierce squabbles. More important, nobody really knows what problems the fault model should address. What are all the possible errors that affect today’s supercomputers? Which are most common? Which errors are most concerning? No one yet has the answers.
And while I’ve talked a lot about faults causing machines to crash, these are not, in fact, the most dangerous. More menacing are the errors that allow the application to run to the end and give an answer that looks correct but is actually wrong. You wouldn’t want to fly in an airliner designed using such a calculation. Nor would you want to certify a new nuclear reactor based on one. These undetected errors—their types, rates, and impact—are the scariest aspect of supercomputing’s monster in the closet.
Given all the gloom and doom I’ve shared, you might wonder: How can an exascale supercomputer ever be expected to work? The answer may lie in a handful of recent studies for which researchers purposely injected different types of errors inside a computer at random times and locations while it was running an application. Remarkably enough, 90 percent of those errors proved to be harmless.
One reason for that happy outcome is that a significant fraction of the computer’s main memory is usually unused. And even if the memory is being used, the next action on a memory cell after the bit it holds is erroneously flipped may be to write a value to that cell. If so, the earlier bit flip will be harmless. If instead the next action is to read that memory cell, an incorrect value flows into the computation. But the researchers found that even when a bad value got into a computation, the final result of a large simulation was often the same.
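You can see the same effect in a toy fault-injection experiment, which illustrates the mechanism rather than reproducing the studies above: flip one random bit in a large array, run a “computation” that reads only part of the array and overwrites some of it before reading, and count how often the final answer changes. Flips in memory that is never read, overwritten first, or too small to survive rounding all turn out to be harmless.
```python
# A toy fault-injection experiment (an illustration of the mechanism, not the
# studies above): flip one random bit and see whether the answer changes.
import random
import struct

N, USED = 10_000, 2_000            # the "simulation" reads only the first 2,000 values

def run_computation(data):
    work = list(data)
    for i in range(USED // 2):     # half of the values it uses get overwritten first
        work[i] = 1.0
    return sum(work[:USED])

def flip_random_bit(data):
    i = random.randrange(len(data))
    bits = struct.unpack("<Q", struct.pack("<d", data[i]))[0]
    bits ^= 1 << random.randrange(64)       # flip one of the value's 64 bits
    data[i] = struct.unpack("<d", struct.pack("<Q", bits))[0]

random.seed(1)
baseline = run_computation([0.5] * N)

harmful, trials = 0, 2_000
for _ in range(trials):
    data = [0.5] * N
    flip_random_bit(data)
    if run_computation(data) != baseline:
        harmful += 1

print(f"{100 * harmful / trials:.1f}% of injected bit flips changed the answer")
```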
Errors don’t, however, limit themselves to data values: They can affect the machine instructions held in memory, too. The area of memory occupied by machine instructions is much smaller than the area taken up by the data, so the probability of a cosmic ray corrupting an instruction is smaller. But it can be much more catastrophic. If a bit is flipped in a machine instruction that is then executed, the program will most likely crash. On the other hand, if the error hits in a part of the code that has already executed, or in a path of the code that doesn’t get executed, the error is harmless.
There are also errors that can occur in silicon logic. As a simple example, imagine that two numbers are being multiplied, but because of a transient error in the multiplication circuitry, the result is incorrect. How far off it will be can vary greatly depending on the location and timing of the error.
As with memory, flips that occur in silicon logic that is not being used are harmless. And even if this silicon is being used, any flips that occur outside the narrow time window when the calculation is taking place are also harmless. What’s more, a bad multiplication is much like a bad memory value going into the computation: Many times these have little or no effect on the final result.
So many of the faults that arise in future supercomputers will no doubt be innocuous. But the ones that do matter are nevertheless increasing at an alarming rate. So the supercomputing community must somehow address the serious hardware and software challenges they pose. What to do is not yet clear, but it’s clear we must do something to prevent this monster from eating us alive.
Digital Baby Project's Aim: Computers That See Like Humans
By Jeremy Hsu
Photo: Paul Biris/Getty Images
Can artificial intelligence evolve as a human baby does, learning about the world by seeing and interacting with its surroundings? That’s one of the questions driving a huge cognitive psychology experiment that has revealed crucial differences in how humans and computers see images.
The study has tested the limits of human and computer vision by examining each one’s ability to recognize partial or fuzzy images of objects such as airplanes, eagles, horses, cars, and eyeglasses. Unsurprisingly, human brains proved far better than computers at recognizing these “minimal” images even as they became smaller and harder to identify. But the results also offer tantalizing clues about the quirks of human vision—clues that could improve computer vision algorithms and eventually lead to artificial intelligence that learns to understand the world the way a growing toddler does.
“The study shows that human recognition is both different and better performing compared with current models,” said Shimon Ullman, computer scientist at the Weizmann Institute of Science in Rehovot, Israel. “We think that this difference [explains the inability] of current models to analyze automatically complex scenes—for example, getting details about actions performed by people in the image, or understanding social interactions between people.”
Human brains can identify partial or fuzzy minimal images based on certain “building block” features in known objects, Ullman explained. By comparison, computer vision models or algorithms do not seem to use such building block knowledge. The details of his team’s research were published today in the online issue of the journal Proceedings of the National Academy of Sciences.
The study involved more than 14,000 human participants, tested on 3,553 image patches. Such a staggering number of participants made it completely impractical to bring each person into the lab. Instead, Ullman and his colleagues crowdsourced their experiment to thousands of online workers through the service known as Amazon Mechanical Turk. The researchers then verified the online results by comparing them to a much smaller group of human volunteers in the lab.
Human brains easily outperformed the computer vision algorithms tested in the study. But an additional twist in the findings may highlight a key difference between how the human brain and computer vision algorithms decode images. The testing showed a sudden drop in human recognition of minimal images when slight changes make the images too small or fuzzy to identify. Human volunteers were able to identify baseline “minimal” images about 65 percent of the time. But when images were made even smaller or more blurry, recognition levels dropped below 20 percent. By comparison, the computer algorithms generally performed worse than human recognition, but did not show a similar “recognition gap” in performance as the images became smaller or fuzzier.
Such results suggest that the human brain relies upon certain learning and recognition mechanisms that computer algorithms lack. Ullman and his colleagues suspect that the results can be explained by one particular difference between the brain and computer vision algorithms.
Today’s computer vision models rely on a “bottom-up” approach that filters images based on the simplest features possible before moving on to identify them by more complex features. But human vision does not rely on just the bottom-up approach. The human brain also works “top-down,” comparing a standard model of certain objects with a particular object that it’s trying to identify.
“This means, roughly, that the brain stores in memory a model for each object type, and can use this internal model to ‘go back’ to the image, and search in it specific features and relations between features, which will verify the existence of the particular object in the image,” Ullman explained. “Our rich detailed perception appears to arise from the interplay between the bottom-up and top-down processes.”
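A schematic sketch of that interplay, using random stand-in data rather than Ullman's actual model, might look like the following: a cheap bottom-up cue (local edge energy) decides which image patches are worth the more expensive top-down step of comparing against a stored template of the object.
```python
# A schematic bottom-up plus top-down recognition check (stand-in data only;
# this is not Ullman's model).
import numpy as np

def bottom_up_score(patch):
    # Bottom-up: simple local edge energy, computed without knowing the object
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def top_down_score(patch, template):
    # Top-down: compare the patch against an internal model of the object
    # (normalized cross-correlation with a stored template)
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float(np.mean(p * t))

def recognize(patch, template, edge_thresh=0.05, match_thresh=0.5):
    # Only patches with enough bottom-up structure reach the top-down check
    if bottom_up_score(patch) < edge_thresh:
        return False
    return top_down_score(patch, template) > match_thresh

rng = np.random.default_rng(0)
template = rng.random((20, 20))                       # stored "model" of the object
noisy_view = template + 0.2 * rng.normal(size=template.shape)
print(recognize(noisy_view, template))                # True: degraded view still verified
print(recognize(rng.random((20, 20)), template))      # False: unrelated patch
print(recognize(np.zeros((20, 20)), template))        # False: no bottom-up structure
```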
That top-down human brain approach could inspire new computer models and algorithms capable of developing a complex understanding of the world through what they see. To that point, Ullman’s research received some funding through a “Digital Baby” project grant provided by the European Research Council. His group also received backing through the U.S. National Science Foundation’s support of the Center for Brains, Minds and Machines at MIT and other universities. One of the major research goals of the center is “Reverse Engineering the Infant Mind.”
Ullman envisions an artificial intelligence that starts out without detailed knowledge of the world and has sophisticated learning capabilities through vision and interaction:
As a baby, you open your eyes, see flickering pixels, and somehow it all comes together and you know something about the world. You’re starting from nothing, absorbing information and getting a rich view of the world. We were thinking about what would it take to get a computer program where you put in the minimal structures you need and let it view videos or the world for six months. If you do it right, you can get an interesting system.
Even better computer vision could someday enable Siri or Cortana, the virtual assistants in personal smartphones and tablets, to recognize human expressions or social interactions. It could also empower technologies such as self-driving cars or flying drones, making them better able to recognize the world around them. For example, driverless car researchers have been working hard to improve the computer vision algorithms that enable robot cars to quickly recognize pedestrians, cars, and other objects on the road.
On the human side, the study offers a new glimpse of how the human brain sees the world. Such research helps bridge the gap between brain science and computer science, Ullman said. And that could hugely benefit both humans and machines.
“We would like to combine psychological experiments with brain imaging and brain studies in both humans and animals to uncover the features and mechanisms involved in the recognition of minimal images, and their use in the understanding of complex scenes,” Ullman said. “Through this, we also hope to better understand the use of top-down processing in both biological and computer systems.”
Did Stephen Curry Inspire ESPN’s Virtual 3-Point Line?
By Tekla S. Perry
Image: ESPN
Nearly 20 years ago, ESPN, the sports broadcasting network, began displaying a yellow virtual first down line when broadcasting football games on television. Developed by Sportvision, a small Silicon Valley company, that yellow line initially mystified fans: “Is it on the field, or not?” millions of viewers wondered.
These days, we can’t imagine watching football on TV without knowing exactly where that first-down line is. And that technology spawned a host of virtual graphics that augment sports action for onscreen viewers—most recently, the America’s Cup races. (Sportvision founder Stan Honey detailed that technology in “The Augmented America’s Cup.”)
Tonight, a new virtual line hits the TV screen: a virtual 3-point line for basketball, debuting with the tipoff of ABC's primetime broadcast of a National Basketball Association game pitting the San Antonio Spurs against the Cleveland Cavaliers. Unlike football, where the line indicating how far a team has to advance the ball in order to earn a fresh set of downs is constantly moving, the 3-point line is painted on the basketball court and never moves. But after the Golden State Warriors won last year's NBA championship and rattled off 24 consecutive wins this season before suffering their first loss—feats in no small measure due to the long-distance shooting wizardry of Warriors point guard Stephen Curry—ESPN decided to put 3-point shots in a virtual spotlight and make it clear immediately whether any attempt is successful.
The network’s “Virtual 3” technology lights up the line for every 3-point shot attempt. The illumination is turned off immediately if the player misses; if the ball goes in the basket, the line remains lit up until the ball is handed over to the other team. ESPN developed the technology in house, at the company’s Princeton Visual Technology Lab.
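Those rules amount to a small state machine. The toy version below illustrates that logic; it is not ESPN's production code.
```python
# A toy state machine for the Virtual 3 illumination rules described above:
# light the line on a 3-point attempt, turn it off immediately on a miss, and
# keep it lit on a make until the ball changes hands.

class Virtual3Line:
    def __init__(self):
        self.lit = False
        self.hold_until_turnover = False

    def on_event(self, event):
        if event == "three_point_attempt":
            self.lit = True
            self.hold_until_turnover = False
        elif event == "shot_missed":
            self.lit = False
        elif event == "shot_made":
            self.hold_until_turnover = True   # stay lit...
        elif event == "possession_change" and self.hold_until_turnover:
            self.lit = False                  # ...until the other team gets the ball
            self.hold_until_turnover = False

line = Virtual3Line()
for ev in ["three_point_attempt", "shot_made", "possession_change"]:
    line.on_event(ev)
    print(ev, "-> line lit:", line.lit)
```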
According to an interview with Jed Drake, ESPN’s vice president of production innovation, that was published on the company’s media site, creating the virtual line was a tricky proposition. Unlike the first down line in football, there is an existing 3-point line in the real world, so the virtual line must mask it exactly—and it can be just a few pixels wide when seen on screen. And, Drake said, the curve of the 3-point line added a challenge (though, to be fair, a football field has curves the virtual yellow line must also follow).
There have been recent advances in technology that allow cameras to be tracked through video analysis instead of the use of on-camera sensors. These advances, say experts, could make implementing augmented video features for sports productions cheaper and easier. But according to Ken Milnes, who worked on much of Sportvision’s groundbreaking technology and is now a consultant, technological advance was not likely the impetus for this new development. Rather, he said, it’s the need to help the producers tell a particular story—and the Warriors’ success in 3-point shooting is definitely the story in basketball these days.
ESPN’s Drake confirms that bringing the 3-point line from idea to implementation was an eight-month project. Do the math: it’s been just about eight months since the beginning of the 2015 NBA finals that the Warriors won, in large part due to Curry’s 3-point shooting.
Coincidence? I doubt it.
So, if you’re watching today’s Spurs-Cavaliers game, be sure to take notice of the Virtual 3 “Curry” line.
Taiwan Neglects Supercomputing
The chip leader’s fastest machine has slipped off the list of the 500 most powerful supercomputers
By Yu-Tzu Chiu
Photo: NCHC
Windrider Aground: Taiwan’s most powerful computer has fallen off the Top 500 list.
A quick glance at the new ranking of top supercomputers reveals a surprising showing by one of the world’s technological powerhouses: Taiwan does not possess a single machine powerful enough to make the Top500.org list. While there are many nations that don’t make the list, Taiwan is peculiar in that it has such an outsized grip on the computer chip industry. What’s more, its political rival, China, not only has the world’s top machine, it now has more ranking supercomputers than any nation except the United States.
It has been a long decline. Taiwan’s most powerful supercomputer, the Advanced Large-scale Parallel Supercluster, also known as ALPS or Windrider, ranked 42nd in June 2011, shortly after its launch.
But the process of upgrading Taiwan’s supercomputing infrastructure has been slowed by ineffective government budget allocation. Since 2013, the National Center for High-performance Computing (NCHC), located in Hsinchu City, which operates Windrider, has failed twice to get enough of a budget boost to strengthen its supercomputing ability. While other countries poured money into the installation of powerful supercomputers as a way to show national power, Windrider fell to 303rd and then 445th in June 2014 and June 2015.
“If our three-year budget proposal is approved early [in 2016], Taiwan would gain a much better position on the Top 500 in 2018, when a 2-petaflops system is launched,” says Jyun-Hwei Tsai, deputy director general of NCHC. If such a system were launched today, it would rank 36th.
Officials at the Ministry of Science and Technology say they have prioritized supercomputing in their annual budget proposal—as they did in 2013 and in 2014. However, it’s really up to “the Cabinet,” the executive branch of the Taiwanese government.
Cabinet spokesman Sun Lih-chyun says the government fully understands the importance of supercomputing and points out that Taiwan has promoted cloud computing and big-data projects. “It remains uncertain when sufficient budget would be made available for new systems. We’re still reviewing the budget proposal. The decision has not yet been made,” Sun says.
“The Cabinet will make a final decision early [this] year,” adds Tzong-Chyuan Chen, director general of the Department of Foresight and Innovation Policies under the ministry. “In economic recession years, it’s difficult to gain budget for important science and technology projects with long-term impacts, which are not yet felt.”
It wasn’t always like this. In June 2002, an IBM system at the NCHC center ranked 60th. In June 2007, the center’s newest system, called Iris, ranked 35th.
Iris’s place on the list wasn’t long-lived. It was displaced by November 2009 due to a boom in supercomputer installations in many other countries, such as China. The huge increase in China’s supercomputing power in recent years can be attributed in part to some government-backed companies, such as Sugon Information Industry Co. and Inspur Group Co., which together manufactured 64 of the ranked systems.
According to NCHC’s Tsai, the big strides taken by other countries are a sore point in Taiwan. “We don’t compare ourselves with big countries, such as China, Japan, and the United States. What frustrates us more is that, in South Korea, the momentum of national supercomputing is now stronger than ours,” he says. Currently, South Korea’s two fastest systems rank 29th and 30th.
It’s not as if there isn’t much demand for supercomputing in Taiwan. Currently, Taiwan’s Windrider utilization exceeds 80 percent. “It’s like a crowded superhighway. And we’ve heard complaints from some users,” Tsai says.
According to Tsai, Windrider is most significantly used in basic physics, chemistry, and biomedical imaging. But certain key fields get prioritized access. Those include environmental studies, climate change, and natural disasters.
“Taiwan is prone to natural disasters, such as typhoons, floods, and earthquakes. A powerful database, backed by powerful supercomputing systems, is essential for conducting better predictions of typhoons,” Tsai says.
Due to the limitations of Taiwan’s supercomputing capability, some scientists have taken to building their own computer clusters and speeding up existing resources with graphics processing unit-based accelerators.
Tzihong Chiueh, a theoretical astrophysicist at National Taiwan University, in Taipei, says he and his colleagues there have not relied on NCHC’s system for years. Chiueh, whose team has since 2013 been taking advantage of a self-built system that can reach tens of teraflops, says, “The investment [in a petaflops-scale system] should indeed be prioritized. I hope it can work at least 10 times faster than the current system.”
Tom Cruise would have looked much less cool in the 2002 film Minority Report if he’d swiped through images on his computer display with gloves that required clunky data cables or heavy battery packs. A real-world glove promises to bring that sleek Minority Report–style future one step closer by harvesting energy from the wearer’s finger motions.
The prototype glove, called “GoldFinger,” uses piezoelectric transducers that convert the mechanical motions of the glove user’s fingers into electricity. It doesn’t generate enough power to keep the glove’s battery fully charged during typical usage, but shows how the technology could boost the battery charge or potentially reduce the battery’s size. Italian and U.S. researchers who developed the glove sewed electrically-conductive filaments into its nylon fabric to ensure maximum flexibility for the wearer—a crucial factor for a human-machine interface (HMI) glove intended to control computer or virtual displays.
“The use of a glove requires comfort and reliability and these requirements are not less important than the increased energetic autonomy of the device,” says Giorgio De Pasquale, a mechanical and aerospace engineer at the Polytechnic University of Turin, in Italy. “This also makes the difference between GoldFinger and other HMI gloves that use wires to send data, or large and heavy batteries for the supply.”
HMI gloves were first proposed in the 1970s and 1980s, and the earliest commercialized versions appeared in the 1990s for virtual modeling and certain medical applications. But along with those devices came user limitations in the form of stiff materials, rigid electronic components, inelegant data cables providing wired communication, and bulky power supplies.
GoldFinger, with its focus on freedom and flexibility, represents the latest, user-friendly generation of HMI gloves. All of the rigid electronic components and the battery are tucked inside an aluminum case located on the backside of the glove.
An optical port located on each finger emits LED light, allowing a computer to track the GoldFinger glove’s motions. De Pasquale’s younger brother, Daniele, a master’s degree candidate in computer engineering at the school in Turin, wrote the software that translates the finger gestures into computer system commands. (The current GoldFinger prototype, which serves as a proof of concept, has just one optical port.)
The Italian siblings-turned-researchers, who also enlisted the help of Sang-Gook Kim, a mechanical engineer at MIT, presented a paper detailing their research on 3 December at the PowerMEMS 2015 conference in Boston.
To test the glove’s energy-harvesting power, they opened and closed gloved fingers for 10 seconds. The motion generated an average of about 32 microwatts of power—enough for the glove’s optical port to operate for about half a minute per hour without drawing on any battery power at all. GoldFinger’s current battery can keep the optical port powered continuously for 104 hours.
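Some back-of-envelope arithmetic, based on assumptions not given by the researchers (steady harvesting at the reported rate, and the optical port as the glove's dominant load), hints at what those numbers imply about the port's power draw and the battery's capacity.
```python
# Rough arithmetic, not from the paper: assumes ~32 uW of harvesting whenever
# the fingers are moving, and that the optical port dominates the power draw.
harvest_w = 32e-6

# "Half a minute of port operation per hour" implies a port power of roughly:
port_w = harvest_w * 3600 / 30          # energy harvested in an hour / 30 s of use
print(f"implied port draw: {port_w * 1e3:.1f} mW")        # ~3.8 mW

# 104 hours of continuous operation then implies a battery of roughly:
battery_wh = port_w * 104
print(f"implied battery capacity: {battery_wh:.2f} Wh")   # ~0.4 Wh
```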
The energy-harvesting system may not sound impressive on the surface, but it could realistically help extend the glove’s battery charge during periods of low or medium usage. Its creators envision the glove primarily helping workers in industrial plants who might use it to occasionally interface with machinery in the midst of their normal work routines.
Specifically, GoldFinger would likely activate its optical port(s) during less than 10 percent of the normal workday. Such relatively low usage, say the glove’s developers, would allow the finger motions to significantly extend the battery life. It would last about 14 percent longer if the glove’s optical port is active for 5 percent of a shift. Energy harvesting would boost the battery life by up to 70 percent if the glove is active only 1 percent of the time. And that’s just the current prototype.
The lab version of GoldFinger already has a design “very close to a commercial version”; it accounts for production issues such as process and component availability, Giorgio De Pasquale says. But he and his colleagues plan to continue boosting the performance of the current prototype and working on potential spinoff devices.
HMI gloves have already found uses among a number of large companies. Automaker Daimler-Benz used its own version of “data gloves” to enable workers to grasp and manipulate virtual objects inside the passenger cabin of a virtual reality car. Technicians at aerospace giant Boeing have also used such gloves to simulate maintenance tasks on aircraft. Even pilots have gotten their hands in such gloves as part of cockpit simulations.
GoldFinger and similar gloves could also prove handy in applications such as design and 3-D modeling, allowing robot operators to remotely control the claws and other appendages of their machines, or giving surgeons a wearable remote tool for accessing crucial medical images or data.
“The primary goal of future-inspired researchers is to make available borderline technologies, even without sharply targeted applications,” the elder De Pasquale says. “The existence of the technology will push its demand and the demand will push the technology improvement.”
Yahoo’s Engineers Move to Coding Without a Net
By Tekla Perry
Photo: Andrew Harrer/Bloomberg/Getty Images
What happens when you take away the quality assurance team in a software development operation? Fewer errors, not more, along with a vastly quicker development cycle.
That, at least, has been the experience at Yahoo, according to Amotz Maimon, the company’s chief architect, and Jay Rossiter, senior vice president of science and technology. After some small changes in development processes in 2013, and a larger push from mid-2014 to the first quarter of 2015, software engineering at Yahoo underwent a sea change. The effort was part of a program Yahoo calls Warp Drive: a shift from batch releases of code to a system of continuous delivery. Software engineers at Yahoo are no longer permitted to hand off their completed code to another team for cross checking. Instead, the code goes live as-is; if it has problems, it will fail and shut down systems, directly affecting Yahoo’s customers.
“Doing that,” Rossiter told me, “caused a paradigm shift in how engineers thought about problems.”
It has also, he said, forced engineers to develop tools to automate the kinds of checks previously handled by teams of humans. An engineer might go through an arduous process of checking code once—but then would start building tools to automate that process.
I met with Maimon and Rossiter at Yahoo’s annual TechPulse conference on Tuesday in Santa Clara. This private get-together gives some 850 of Yahoo’s researchers and engineers an opportunity to publicize their projects by presenting papers and participating in poster sessions.
It was an odd time to be surrounded by Yahoo’s tech staff—all of whom were focused on software developments—because in that day’s newspapers and in news reports I heard on the car radio as I drove to the meeting, rumors swirled about Yahoo’s imminent restructuring. The researchers believe that any change will take some time to affect their operations, so they continue on, business as usual. (There may have been more of a buzz about the company’s future the following day, when Yahoo announced that it had decided to go forward with a reverse spinoff: that is, transferring all its businesses and liabilities except for its stake in China’s Alibaba group to a new company.)
Those structural and financial maneuvers notwithstanding, Yahoo’s decision to take away the safety net the company’s software engineers had come to rely upon was big news. The shift wasn’t easy, Rossiter recalled. It required some tough parenting, with no exceptions, he says. “People would come in and say I’m special, I’m working in UI, I’m on the back end, I’m this, I’m that.” But consistently refusing to grant concessions forced a rethink. “We said ‘No more training wheels,’ and it made a huge difference. We forced excellence into the process.”
“It was not without pain,” Maimon says—though the problems were not as big as he feared. “We expected that things would break, and we would have to fix them. But the error that had been introduced by humans in the loop was larger than what was exposed by the new system.”
“It turns out,” Rossiter chimed in, “that when you have humans everywhere, checking this, checking that, they add so much human error into the chain that, when you take them out, even if you fail sometimes, overall you are doing better.”
Of course, taking away the quality assurance jobs meant, well, taking away jobs. “Some of the engineers really cared about system performance types of things,” Maimon explained, “so they joined related teams. Some started working on automation [for testing], and they thought that was great—that they didn’t have to do the same thing over and over. And others left.”
Now, a year after the change, “It’s 100 percent working,” Maimon says. “It’s amazing. Even people who didn’t think it could ever work now think it’s great, and we are applying it to everything we do in the company.”
Using Instagram to Teach JavaScript
By Tekla Perry
Photo: Vidcode
Instagram. It’s the go-to social network for teenage girls today. (They aren’t using Facebook; that’s for their parents.) So, thought Alexandra Diracles, founder of Vidcode, that’s where you go if you want to get more girls interested in computer science by introducing them to coding.
Vidcode, which graduated from the Intel Education Accelerator last week, has built a curriculum and tools for teaching JavaScript using Instagram photos. A user uploads images and videos from Instagram and, using JavaScript, turns them into video greeting cards, music videos, and other projects that they can share online. Beginners drag and drop basic chunks of code, then edit them to change parameters; they evolve to writing their own routines. Vidcode is reaching out to school districts, governments, and groups like the Girl Scouts, and plans to charge $10 to $12 per user per year for the curriculum. The system is already online, with a sample session available for free. Diracles says the company is working on expanding its tools to allow users to edit videos for 3D and virtual reality viewing.