
July 13 2013

July 15, 2013


Newswire

 

Snowden affair clouds U.S. attempts to press China to curb cyber theft

Reuters

Mon, Jul 8 2013

By Paul Eckert

 

WASHINGTON (Reuters) – Revelations by former U.S. spy agency contractor Edward Snowden will make it harder for the United States to confront China at talks this week over the alleged cyber theft of trade secrets worth hundreds of billions of dollars each year.

Snowden’s disclosures of American electronic surveillance around the world give China an argument to counter U.S. complaints that it steals private intellectual property (IP) from U.S. companies and research centers.

Cyber security is at the center of high-level meetings between the two countries in Washington that will show whether a positive tone struck by President Barack Obama and new Chinese President Xi Jinping at a summit last month can translate into cooperation on difficult issues.

Top U.S. officials from Obama down have long tried to convince China to recognize a clear line between the kind of cyber espionage by spy agencies revealed by Snowden and the stealing of technology.

“This Snowden thing has muddied the waters in a terrible way,” said James McGregor, author of a book on China’s authoritarian capitalism and industrial policy.

“China would rather have the waters muddy, because they can say ‘You do it. We do it. What’s the big deal?’ and the cyber theft against companies will go on and on,” he said by telephone from China, where he is senior counselor for APCO Worldwide, a U.S. business consultancy.

Treasury Secretary Jack Lew said last week that U.S. officials will press China at the talks on cyber theft, a problem he described as “just different from other kinds of issues in the cyber area.”

Many countries spy on each other, but U.S. officials say China is unique in the amount of state-sponsored IP theft it carries out as it tries to catch up with the United States in economic power and technological prowess.

Last week the U.S. Department of Justice charged Chinese wind turbine maker Sinovel Wind Group Co and two of its employees with stealing software source code from U.S.-based AMSC in an alleged theft worth $800 million.

The U.S. Chamber of Commerce hopes “to see a clear indication that China recognizes thefts of trade secrets, whether by cyber or other means, is stealing property and will bring the full force of its laws to curb this,” said Jeremie Waterman, the group’s senior director for Greater China.

Beijing regularly parries complaints about Chinese hacking into the computers of U.S. businesses by saying that China is itself a major victim of cyber espionage. Chinese officials have dismissed as unconvincing recent U.S. official and private-sector reports attributing large-scale hacking of American networks to China.

China’s official Xinhua news agency last month said the Snowden case showed the United States was “the biggest villain in our age” and a hypocrite for complaining about Chinese cyber attacks.

 

China’s stance appears to be bolstered by Snowden’s revelations of widespread surveillance by the National Security Agency and his assertion that the agency hacked into critical network infrastructure at universities in China and in Hong Kong.

Snowden first fled to Hong Kong before his leaks to newspapers became public last month, and has subsequently gone to Moscow. He is believed to be holed up in the transit area of the city’s Sheremetyevo International Airport and has been trying to find a country that would give him sanctuary.

 

‘OUT OF BOUNDS’ SPYING

Now in their fifth year, the annual U.S.-Chinese talks, known as the Strategic and Economic Dialogue, will cover topics from U.S. concerns about North Korea’s nuclear weapons and expanding U.S.-China military ties to climate change and access to Chinese financial markets.

China’s exchange-rate policy is on the agenda, although it has receded as an issue with the gradual strengthening of the yuan and a reduction of huge current account imbalances.

This year Secretary of State John Kerry and Lew host Chinese State Councilor Yang Jiechi and Vice Premier Wang Yang for the first S&ED session since China’s once-a-decade leadership change in March when Xi took over.

The meetings follow Obama’s summit last month with Xi in California, where the two men developed what aides called a productive relationship. Nevertheless, Obama demanded Chinese action to halt what he called “out of bounds” cyber spying.

Civilian and military officials from the two countries discussed international law and practices in cyberspace at low-level talks on Monday. Cyber security is due to come up at other meetings throughout the week that will also likely address U.S. accusations that Beijing gained access electronically to Pentagon weapons designs.

IP theft costs U.S. businesses $320 billion a year, equal to the annual value of U.S. exports to Asia, the authors of a recent report say.

A bipartisan group of high-ranking former U.S. officials known as the Commission on the Theft of American Intellectual Property said in a May report that China accounts for between 50 percent and 80 percent of IP theft suffered by U.S. firms.

Cyber theft of industrial designs, business strategies and trade secrets is only a portion of IP pilfering.

IP theft more commonly involves “planted employees, bribed employees, employees who were appealed to on the basis of nationalism and all the traditional means of espionage, often accompanied by cyber,” said Richard Ellings, president of the National Bureau of Asian Research think tank, who co-wrote the report.

Federal prosecutors in Manhattan charged three New York University researchers in May with conspiring to take bribes from Chinese medical and research outfits for details about NYU research into magnetic resonance imaging technology.

Arrests by U.S. Immigration and Customs Enforcement and the Homeland Security Department for IP infringements rose 159 percent and indictments increased 264 percent from 2009 to 2013, according to a report released in June by the U.S. Intellectual Property Enforcement Coordinator.

The Commission on the Theft of American Intellectual Property called for tough penalties including banking sanctions, bans on imports and blacklisting in U.S. financial markets.

 

 

Special Report: Cyber Priorities

Snowden Incident Returns Spotlight to Employee Danger

 

Defense News

Jul. 9, 2013 – 06:00AM |

By ZACHARY FRYER-BIGGS

http://www.defensenews.com/article/20130709/DEFREG02/307090010/

 

WASHINGTON — Edward Snowden, the leaker currently stuck in Russia who disclosed a wide range of secrets about US government surveillance and spying, has changed the conversation about cybersecurity. Not because of the documents he released, but because he is a reminder of how vulnerable organizations are to insiders with access to large swathes of information and system components.

It’s a lesson that was the talk of the cyber community following the WikiLeaks disclosures attributed to Bradley Manning, but one that faded as experts began to focus on the growing threat of foreign governments, particularly China. It is back in vogue because of the volume and sensitivity of the information Snowden has made public.

Some of the fallout from the Manning case, such as the banning of thumb drives and other external media from sensitive systems, has been walked back in some instances in the name of practicality. One of the problems, as with any security issue, is that you can’t make a network truly safe from an insider.

“It’s akin almost to insider attacks in Afghanistan,” Army Gen. Martin Dempsey, chairman of the US Joint Chiefs of Staff, said during a late June speech. “Well, the answer is that you can’t prevent it. You can mitigate the risk, and what I’d like you to take away from this conversation about the incident with Snowden is you can’t stop someone from breaking the law 100 percent of the time. You just can’t stop that from happening.”

Dempsey did, however, suggest steps to reduce the threat of insiders to Defense Department networks, including cutting the number of people in positions like Snowden’s.

“I think systems administrators is the right place to begin to clean this up because they have such ubiquitous access, and that’s how he ended up doing what he did,” he said. “We really need to take advantage of thin client and cloud technology, to dramatically reduce the number of systems administrators that we have managing programs, which will make it both more effective and safer.”

That approach carries its own risk, because access will be concentrated in the hands of fewer individuals, said Jeff Moulton, director of information operations at Georgia Tech Research Institute.

“What they’ve done now is rather than mitigating the threat, they’ve increased the likelihood of a catastrophic impact from a threat,” he said. “It’s not going to help. It introduces other problems, like the broader access of the cloud.”

One idea suggested by several cyber experts, including Moulton, is to adopt nuclear launch security as a guide. When it comes to the use of nuclear weapons, two separate individuals have to provide authentication before a weapon can be used. Not only does this prevent accidents, but it guarantees that a second person will be monitoring the activity of the first.

In the cyber realm, this could be achieved by requiring two people to provide their security credentials before either could access certain kinds of documents or segments of the network control system.
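As a rough sketch of that dual-authorization idea, the hypothetical Python snippet below releases a sensitive document only when two different users both authenticate. The credential store, the verify_credential helper and the document name are invented for illustration; no actual DoD system is being described.

```python
import hmac
import hashlib

# Hypothetical credential store: user -> SHA-256 hash of a shared secret.
# In a real system these would live in a directory service or HSM.
CREDENTIALS = {
    "admin_a": hashlib.sha256(b"first-secret").hexdigest(),
    "admin_b": hashlib.sha256(b"second-secret").hexdigest(),
}

def verify_credential(user: str, secret: bytes) -> bool:
    """Check one user's secret against the stored hash (constant-time compare)."""
    expected = CREDENTIALS.get(user)
    if expected is None:
        return False
    return hmac.compare_digest(expected, hashlib.sha256(secret).hexdigest())

def open_sensitive_document(doc_id: str,
                            user1: str, secret1: bytes,
                            user2: str, secret2: bytes) -> str:
    """Release a document only if two *different* users both authenticate,
    mirroring the two-person rule used for nuclear launch authorization."""
    if user1 == user2:
        raise PermissionError("Two distinct approvers are required")
    if not (verify_credential(user1, secret1) and verify_credential(user2, secret2)):
        raise PermissionError("Dual authorization failed")
    # Both approvers verified; this is also where an audit record
    # naming both users would be written.
    return f"[contents of {doc_id}]"

if __name__ == "__main__":
    print(open_sensitive_document("network-map.pdf",
                                  "admin_a", b"first-secret",
                                  "admin_b", b"second-secret"))
```

The design point is simply that no single administrator can act alone; every release of protected material requires a second, independently verified approver.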

“Is it time consuming? Perhaps,” Moulton said. “But what’s more time consuming, doing this or armchair quarterbacking?”

Still, there will always be a residual threat from insiders, which is why deterrence is key, said Ian Wallace, a visiting fellow with the Brookings Institution and a former official with the British Ministry of Defence.

“The insider threat will always exist, and it will be next to impossible to stop it completely,” Wallace said. “But there are also plenty of ways in which that can be deterred. Not the least of those is the traditional deterrent of getting caught and prosecuted, something which is even more likely with the emergence of companies doing big data analysis of behavior on their own systems.”

Wallace cautioned that all of this attention on the insider threat may be misguided. Statistically, insider attacks are exceedingly rare, even if the data that is lost or the risk to systems from a determined insider is significant.

“All of the evidence that I have heard from the best cybersecurity firms suggests that the main threat is still the remote threat, for three compelling reasons: the risk of being caught is much less, it is much more scalable, and at present it is still, sadly, relatively easy for a sophisticated and determined intruder to get into all but the best protected systems,” Wallace said.

In the hunt for solutions to the insider threat, one of the big questions is how to detect intent from an employee ahead of a problem. In much the same way that concerns have surfaced about what radicalized the Boston bombing suspects and whether it could have been detected earlier, experts are studying how to discover the intentions of insider threats sooner.

That can take the form of such mundane facts as the speed at which an employee types. Changes in the rate of typing can indicate mood, a tip that further inquiry might be needed.
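As a hedged illustration of that idea, the sketch below flags a day on which typing speed deviates sharply from an employee’s own historical baseline. The sample data, the z-score threshold and the function name are all hypothetical; real monitoring products would draw on far richer signals.

```python
from statistics import mean, stdev

def flag_typing_anomaly(baseline_wpm: list[float],
                        todays_wpm: float,
                        z_threshold: float = 2.5) -> bool:
    """Return True if today's typing speed deviates sharply from the
    employee's own historical baseline (a cue for further inquiry,
    not evidence of wrongdoing on its own)."""
    if len(baseline_wpm) < 10:
        return False  # not enough history to form a baseline
    mu, sigma = mean(baseline_wpm), stdev(baseline_wpm)
    if sigma == 0:
        return False
    z = abs(todays_wpm - mu) / sigma
    return z > z_threshold

# Example: an employee who normally types ~70 wpm suddenly drops to 40 wpm.
history = [68, 72, 71, 69, 70, 73, 67, 70, 71, 72]
print(flag_typing_anomaly(history, 40))   # True  -> worth a closer look
print(flag_typing_anomaly(history, 69))   # False -> within normal variation
```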

But to gain that type of data, a certain degree of invasiveness is required, and some superficial profiling of behavior is employed.

That creates all kinds of legal and ethical questions but may be a necessity for large organizations with many people to monitor, Moulton said.

“You can’t monitor everybody all the time,” he said. “Look at what the casinos do. They profile, but that’s a really difficult word. Are we prepared to profile?”

Dempsey emphasized that some actions would be taken to improve the system, but he described a certain degree of risk acceptance.

“You can certainly increase the scrutiny in terms of their background investigations, you can reduce the number of them you get, there are different degrees of oversight in place,” he said. “But at some point, if somebody is going to break the law and commit an act of treason (I don’t know what he’ll eventually be charged with) or espionage, they’re going to be able to do that.”

 

 

DOD building its own secure 4G wireless network

GCN.com

By Kathleen Hickey

Jul 03, 2013

http://gcn.com/Articles/2013/07/03/DOD-secure-4G-wireless-network.aspx?s=security_110713&admgarea=TC_SecCybersSec&p=1

 

The Defense Department expects to have its own secure 4G wireless network up and running by the middle of next year, hosting a variety of iPhones, iPads and Android devices.

The network is part of DOD’s four-year, $23 billion investment in cybersecurity, which also calls for hiring an additional 4,000 people for its cyber workforce, establishing common standards and improving coordination in investing and managing cyber resources, Gen. Martin Dempsey, chairman of the U.S. Joint Chiefs of Staff, said in a recent speech at the Brookings Institution.

Dempsey said he had a secure mobile phone that “would make both Batman and James Bond jealous.”

Dempsey also spoke about creating a federal app store using off-the-shelf technology to “allow any DOD user to write and share phone and tablet apps.” On June 28, the Defense Information Systems Agency announced it awarded Digital Management, Inc. a $16 million contract to build the DOD’s first enterprisewide mobile application store and mobile device management system.

The secure 4G network is part of the DOD’s Joint Information Environment initiative to consolidate its 15,000 networks into a cloud environment.

“The new Joint Information Environment will deepen collaboration across the services and mission areas. It will also be significantly more secure, helping ensure the integrity of our battle systems in the face of disruption,” said Dempsey.

A few news outlets, such as TechInvestorNews, speculated that the network might be a ploy by DOD to exclude itself from the National Security Agency’s surveillance program, since its calls would not go through Verizon or other commercial carriers from which NSA collects metadata.

But the network could also just be a sign of DOD recognizing the growing importance of mobile computing. The military has long had its own non-classified and classified IP networks — NIPRnet and SIPRnet. As it uses more smart phones and tablets, that approach to security is extending to mobile.

Since Dempsey was appointed chairman in 2011, critical infrastructure attacks have increased 17-fold, he said at Brookings, although he did not specify the exact number of attacks, nor how many occurred prior to his taking office.

“Cyber has escalated from an issue of moderate concern to one of the most serious threats to our national security,” he said. And in addition to military systems, securing civilian infrastructure and businesses, such as those in the banking, chemical, electrical, water and transport sectors, is vitally important.

“Although we have made significant progress embracing cyber within the military, our nation’s effort to protect civilian critical infrastructure is lagging,” Dempsey said. “Too few companies have invested adequately in cybersecurity.”

“One of the most important ways we can strengthen cybersecurity across the private sector is by sharing threat information. Right now, threat information primarily runs in one direction — from the government to operators of critical infrastructure. Very little information flows back to the government,” he said. “This must change. We can’t stop an attack we can’t see.”

 

Commentary: Can Driverless Cars Save the Postal Service?

http://www.nextgov.com/emerging-tech/2013/07/commentary-can-driverless-cars-save-postal-service/66113/

By Samra Kasim and Matt Caccavale

July 5, 2013

Ding! That sound could soon be the USPS app alerting you to an imminent delivery, after which a driverless Postal Service vehicle arrives at your door and a robotic arm delivers your package.

While this may sound like science fiction, driverless vehicles will be coming to streets near you sooner than you may think. Sixteen states already have introduced driverless vehicle legislation and California, Nevada, Florida, and the District of Columbia have enacted laws allowing driverless vehicles on their roads. Sergey Brin, co-founder of Google and a driverless vehicle advocate, forecasts fully autonomous vehicles will be available for sale in five years.

Driverless vehicles have the potential to transform many enterprises focused on transporting goods. The Postal Service’s fleet of 215,000 vehicles traveled over 1.3 billion miles in 2012, roughly equivalent to circumnavigating the globe 172 times every business day. Driverless vehicles could reduce operating costs through increased safety, fuel efficiency, and new business models. After posting a quarterly loss of $1.9 billion in May, the Postal Service has every reason to explore reinvention.

Think about what a day in the life of a USPS driverless vehicle might look like:

12:18 a.m. The latest software package with updated mapping information and the day’s optimized delivery route is downloaded directly from fleet headquarters.

12:30 a.m. The vehicle begins delivery on its suburban route — the pre-determined optimal time for mail delivery on that particular day.

5:00 a.m. A local bakery’s two-hour reservation through USPS’s CloudCar program begins and the vehicle delivers bread to grocers around town. Since the bakery owner no longer has to maintain his own fleet of delivery trucks, he can hire two more bakers and double production.  

7:22 a.m. The vehicle stops at a full service gas station, refuels and reports a maintenance diagnostic assessment to fleet headquarters, allowing USPS to forecast maintenance requirements and plan accordingly.

11:13 a.m. After completing initial deliveries, the car is identified as available. Just then, a business executive pulls up the USPS mobile app on her phone, checks in at her current location and orders a rush delivery of a time-sensitive document.

3:15 p.m. While en route, the car’s sensors detect a large pothole, triggering an automatic report to the local transportation department with geotagged images of the hazard.

4:18 p.m. A businessman suddenly remembers that today is his anniversary. He places an order at a local florist, who has an urgent delivery contract with USPS’s new dynamic pricing system. The vehicle stops at the florist and is then routed to the spouse’s residence.

7:14 p.m. After completing its custom delivery orders and returning to the USPS regional warehouse, the vehicle sends its daily diagnostic report to fleet headquarters, and begins the next round of deliveries.

While this is only a thought experiment, the potential for new operating models and cost savings is very real.

Removing the driver from a vehicle enables it to be used around-the-clock. Routes could be designed around optimal traffic patterns and delivery needs. Driverless vehicles also could be used as a shared service with other businesses and government agencies leasing time when the vehicles are available, similar to the Uber Taxi model. With its significant vehicle fleet and 42,000 ZIP code reach, the Postal Service is well positioned to pilot new service models. It could, for instance, coordinate with auto manufacturers and the State of California to test the readiness of its highways for driverless cars.

Driverless vehicles also have the potential to reduce vehicle operating costs. In 2012, Google reported that after driving 300,000 miles, its driverless cars had not been involved in any accidents. Computer control mitigates human error, such as fatigue or distraction, leading to greater safety. Vehicle accidents and wear-and-tear create significant operating costs for large enterprises like USPS. In FY 2011 alone, USPS had over 20,000 motor vehicle accidents. According to OSHA, the average vehicle crash costs an employer $16,500, and the average cost skyrockets to $74,000 when an employee has an on-the-job crash resulting in injury. Even at the lower figure, 20,000 crashes a year represent on the order of $330 million in costs, so with fewer vehicle-related accidents USPS could see substantial savings.

As gas prices continue to climb, fuel is another major cost for large fleet operators. The Postal Service spent nearly $500 million on fuel in 2011 and required another $614 million for maintenance. With an average vehicle age of 16 years, fuel and maintenance costs will continue to climb. A Columbia University study found that “cars simply managing their own speed would increase efficiency by an appreciable 43 percent.” Further, the study estimated that once there are enough driverless vehicles on the road to platoon with one another, efficiency gains may reach 273 percent.

Federal agencies have long promoted innovative technologies, from GPS to the Internet. As the world’s largest purchaser of goods and services and the operator of one of its largest vehicle fleets, the federal government and USPS together have the potential to usher in the driverless car revolution.

 

Sources: DoD Considers 3 Options for JIEDDO

Defense News

Jul. 6, 2013 – 06:00AM |

By MARCUS WEISGERBER     

WASHINGTON — Senior US defense officials are preparing to determine the future of a powerful, high-profile Pentagon organization that has spent nearly a decade developing equipment, tactics and training to defeat roadside bombs.

Last month, House lawmakers included a provision in their version of the 2014 defense authorization bill that requires the Defense Department to provide a report on the future of the Joint Improvised Explosive Device Defeat Organization (JIEDDO).

At a time when the Pentagon is facing hundreds of billions of dollars in spending cuts over the next decade, senior military leadership is said to be considering three options for restructuring JIEDDO: eliminate the organization; break up its duties among the military services through a process called disaggregation; or restructure JIEDDO into a smaller office within the Office of the Secretary of Defense (OSD).

In March 2011, then-Defense Secretary Robert Gates called for the elimination of the JIEDDO director billet, a position held by four different three-star generals since 2008. The elimination would be “based upon deployment of forces and IED threat,” Gates wrote in a memo at the time.

But supporters of JIEDDO said the counter-IED mission must be preserved through the Quadrennial Defense Review, which lays out future US military strategy and is due to Congress early next year. These supporters point to recent intelligence assessments that say terrorist networks will continue to use IEDs against the United States and its allies.

“We have to realize that the IED is part of our operational environment now,” said retired Army Command Sgt. Maj. Todd Burnett, a former senior enlisted adviser to JIEDDO.

A May Center for Naval Analyses assessment of the “post-Afghanistan IED threat” found that the IED threat will likely persist in the coming years.

With that in mind, JIEDDO supporters argue that the third option — creating a smaller office within OSD — would be best.

“DoD needs a small, scalable, agile, OSD-level organization with special authorities, ramp-up ability and flexible funding to implement and synchronize … enduring counter-IED capabilities,” a defense official said.

Since its birth in 2006, JIEDDO has spent about $20 billion, according to budget documents. Spending peaked near $4 billion in 2008, around the time of the surge in Iraq. Since then, spending has declined to about $2 billion. A scaled-down counter-IED organization would likely cost about one-fourth of that, a defense official said.

Officials close to JIEDDO said the office has already cut costs, and they point to the cancellation this year of a number of underperforming programs.

These cancellations have allowed the office to reinvest more than $289 million in training and to purchase reconnaissance robots and bomb-detection equipment. The JIEDDO office is expected to cut 22 percent of its staff by September, a reduction expected to save $163 million.

The majority of the money spent by JIEDDO has gone toward what it calls defeating the device, or purchasing systems and equipment to detect or protect soldiers from IEDs. This includes purchases of robots, electronic jammers, vehicles and even aerostats.

The equipment includes both US and foreign-made systems, such as more than 800 British-built Self-Protection Adaptive Roller Kits, giant rollers that can be mounted on vehicles to detect roadside bombs.

The rest of the funding has gone toward intelligence used to go after IED networks and training equipment.

 

The Options on the Table

In January, the Joint Requirements Oversight Council, a panel that vets military requirements, said the Pentagon must maintain counter-IED capabilities, including the ability to identify threat networks that employ or facilitate IEDs, detect bombs and components, prevent or neutralize bombs, mitigate explosive device effects, distribute bomb-related data across the community of interest and train personnel in counter-IED capabilities.

Since then, three options have emerged as likely courses of action, sources say.

The first — eliminating JIEDDO and its mission — is not likely, a defense official said. The two more likely courses of action are scaling down the existing organization or delegating the training and equipping mission to the services through disaggregation.

If the disaggregation option is chosen, many of JIEDDO’s components could be split among the services, with acquisition authority most likely going to the Army, the official said.

JIEDDO reports to OSD and has special acquisition authority, allowing decisions and purchases to move more quickly.

Through disaggregation, each of the services would likely be responsible for its own training, which supporters of JIEDDO said means different methods and equipment might be used.

Also unclear is how the intelligence apparatus within the organization would be restructured.

The other option is consolidating JIEDDO into a smaller OSD-level organization. An organization under this framework would be best equipped to rapidly procure counter-IED equipment, officials said. Special acquisition authority used by JIEDDO could be applied to this organization, allowing it to field equipment more quickly.

JIEDDO’s goal is to field what it calls capabilities in four to 24 months. After that time frame, the initiatives typically become official programs of record or are terminated.

A review of 132 initiatives deployed showed that 93 — with a total price tag of $5.9 billion — were proved “operationally effective.” An additional 18, costing $900 million, were “operationally effective with some limitations in capability.” An additional 21 — totaling $400 million — were “not operationally proven,” or lacked evaluation information.

A key aspect of JIEDDO likely to be retained in a consolidated organization is the Counter-IED Operations/Intelligence Center (COIC). The center provides operational intelligence and analysis on threat networks to commanders in the field by fusing more than six dozen data sources.

The COIC also regularly interacts with more than two dozen US government intelligence agencies and international partners, including Canada, the UK, Australia and NATO.

 

An International Problem

IEDs are seen as a threat globally, not just in places like Iraq and Afghanistan. Since January 2011, more than 17,000 IED “events” have occurred in 123 countries, according to David Small, a JIEDDO spokesman. Outside Afghanistan, there are an average of 700 IED events each month.

Between December 2012 and May, Iraq experienced 3,352 incidents, the most of any country other than Afghanistan. Colombia experienced 1,005 during that period, with Pakistan third at 883. Syria, which has been in the midst of a civil war, has experienced 382 IED incidents.

In May, JIEDDO signed an agreement with Pakistan to minimize the IED threat. The arrangement allows sharing of information, including tactics, findings from IED incidents, lessons learned, information about IED financiers and information about the flow of IED materials.

Joe Gould contributed to this report.

 

 

Activity-Based Intelligence Uses Metadata to Map Adversary Networks

Defense News

Jul. 8, 2013 – 02:59PM |

By Gabriel Miller     

 

Few outside the intelligence community had heard of activity-based intelligence until December, when the National Geospatial Intelligence Agency awarded BAE Systems $60 million to develop products based on this newish methodology. But ABI, which focuses not on specific targets but on events, movements and transactions in a given area, is rapidly emerging as a powerful tool for understanding adversary networks and solving quandaries presented by asymmetrical warfare and big data.

Indeed, ABI is the type of intelligence tool that could be applied to the vast wash of metadata and internet transactions gathered by the NSA programs that were disclosed in June by a whistle-blower.

In May, the U.S. Geospatial Intelligence Foundation’s Activity-Based Intelligence Working Group hosted a top-secret forum on ABI that drew representatives from the “big five” U.S. intelligence agencies.

At the SPIE 2013 Defense, Security + Sensing Symposium on May 1, NGA Director Letitia Long said the agency is using ABI to “identify patterns, trends, networks and relationships hidden within large data collections from multiple sources: full-motion video, multispectral imagery, infrared, radar, foundation data, as well as SIGINT, HUMINT and MASINT information.”

The technique appears to have emerged when special operators in Iraq and Afghanistan reached back to NGA analysts for help plugging gaps in tactical intelligence with information from national-level agencies. These analysts began compiling information from other intelligence disciplines — everything from signals intelligence and human intelligence to open sources and political reporting — and geotagging it all. The resulting database could be queried with new information and used to connect locations and establish a network.
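A toy sketch of that georeferencing idea, assuming each report from any discipline has already been reduced to a geotagged record. The records, the grid-snapping helper and the entity names below are all hypothetical; the point is only to show how co-located reports from different sources can be linked into a candidate network.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical geotagged reports: (entity, latitude, longitude, source discipline)
reports = [
    ("Entity A", 34.5201, 69.1801, "SIGINT"),
    ("Entity B", 34.5203, 69.1799, "HUMINT"),
    ("Entity A", 34.4300, 69.2100, "open source"),
    ("Entity C", 34.4302, 69.2102, "IMINT"),
]

def grid_cell(lat: float, lon: float, precision: int = 3) -> tuple:
    """Snap a coordinate to a coarse grid cell (~100 m at 3 decimal places)
    so reports at roughly the same place fall into the same bucket."""
    return (round(lat, precision), round(lon, precision))

# Bucket entities by location, regardless of which discipline reported them.
by_location = defaultdict(set)
for entity, lat, lon, _source in reports:
    by_location[grid_cell(lat, lon)].add(entity)

# Entities seen at the same location become candidate links in a network.
links = set()
for entities in by_location.values():
    for pair in combinations(sorted(entities), 2):
        links.add(pair)

print(links)  # candidate links: ('Entity A', 'Entity B') and ('Entity A', 'Entity C')
```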

This experience led to a series of seminal white papers published in 2010 and 2011 by the Office of the Undersecretary of Defense for Intelligence. The papers call ABI “a discipline of intelligence where the analysis and subsequent collection is focused on the activity and transactions associated with an entity, population, or area of interest.”

This focus on interactions is the fundamental difference between ABI and previous efforts to integrate different types of intelligence, which were often confined to a single agency and aimed at a specific target.

“When we are target-based, we focus on collecting the target and, too often, we are biased toward what we know and not looking for the unknown,” NGA’s Dave Gauthier said last year at GEOINT 2012. Gauthier, who handles strategic capabilities in the agency’s Office of Special Programs, called ABI “a rich new data source for observing the world and the connectedness between objects and entities in the world.”

ABI attempts to address two challenges facing traditional intelligence gathering. First, there are no clear signatures for, and no doctrine governing, the activities of the nonstate actors and insurgents who have emerged as the most important threats to U.S. national security. Second, the volume of big data has become “staggering,” in Gauthier’s words. Take, for example, the recent bombing in Boston: There was a massive amount of surveillance imagery available, but analysts initially had no idea whom they were looking for, and moreover, the suspects turned out to look little different from thousands of other spectators on hand.

 

“ABI came out of the realization that the scheduled, targeted, one-thing-at-a-time, stove-piped analysis and collection paradigm was not relevant to non-nation-state and emergent threats,” said Patrick Biltgen, a senior engineer in the intelligence and security sector at BAE Systems. “We are breaking this one-thing-after-another paradigm because information is flowing … all the time and we don’t know what to do with it because if you’ve stopped to try and collect it, you’ve missed everything else that’s coming.”

 

NEW METHODOLOGY

Though the USD(I) white papers call ABI a new discipline, many prefer to think of it more as a methodology with several components.

The first is the constant collection of data on activities in a given area, which is then stored in a database for later metadata searches. The NGA’s Long recently said the agency is working to create a “model that allows us to ‘georeference’ all of the data we collect persistently — over a long period of time,” one that allows “analysts to identify and evaluate data down to the smallest available object or entity.”

The second is the concept of “sequence neutrality,” also called “integration before analysis.”

“We collect stuff without knowing whether it’s going to be relevant or not. We may find the answer before we know the question,” said Gregory Treverton, who directs the Rand Center for Global Risk and Security. “It’s also not so driven by collection; the collection is just going to be there.”

The third is data neutrality — the idea that open-source information may be just as valuable as HUMINT or classified intelligence.

“Humans, unlike other entities, are inherently self-documenting. Simply being born or going to school, being employed, or traveling creates a vast amount of potentially useful data about an individual,” the white papers say. This tendency has exploded on the Internet, “where individuals and groups willingly provide volumes of data about themselves in real time — Twitter and social network forums like Facebook and LinkedIn are only a few examples of the massive amounts of unclassified data that is routinely indexed and discoverable.”

Finally, there is knowledge management, which covers everything from the technical architecture that makes integrated intelligence and information-sharing possible to the metadata tagging that allows analysts to discover data that may be important, but not linked spatially or temporally.

 

USAGE EXAMPLES

ABI products take the form of customizable Web-based interfaces that allow analysts to locate associations among data sets using metadata.

“You could call them Web services, apps, widgets, but they help analysts sift through large volumes of data,” said BAE Systems’ Biltgen.

These do not compete with giant systems like the armed services’ Distributed Common Ground Systems, end-to-end databases that connect thousands of users with intelligence information. Rather, they are generally designed to plug into DCGS, then help smaller working groups deal with specific problems.

“Really, what we’re doing is working with the metadata — the dots and the indexes and extracted ‘ABI things’ — to get those on the screen, whereas the large systems really manage streams of imagery for exploration,” Biltgen said. “We go, ‘Let’s take clip marks and the tags that come from exploited video streams and look at all of them at the same time without ever having to touch a frame of video.’ ”

He said the goal is to “precondition the data and make it easier for the analyst to correlate them, apply their cultural awareness and knowledge to them, and really put the thought muscle on the data after it’s been well conditioned.”

So what does ABI actually produce? One common format is activity layer plots. An analyst might, for example, place all available intelligence about an explosion of an improvised explosive device atop information about a kidnapping in the same area, then lay in data about the local bus line, the fruit market at the corner, or the local timber-smuggling operation. Once displayed, the information may overlap or intersect in interesting ways.
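To make the layer-plot format concrete, here is a minimal, hypothetical sketch that stacks several geotagged event layers on a coarse grid and reports the cells where layers overlap. The layers, coordinates and cell size are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical event layers: event type -> list of (lat, lon) points.
layers = {
    "IED detonation": [(34.512, 69.178)],
    "kidnapping":     [(34.512, 69.179)],
    "bus route stop": [(34.512, 69.178), (34.600, 69.300)],
    "fruit market":   [(34.700, 69.100)],
}

def cell(lat, lon, precision=2):
    # ~1 km cells at 2 decimal places of latitude/longitude
    return (round(lat, precision), round(lon, precision))

# Stack the layers: which cells contain events from more than one layer?
stacked = defaultdict(set)
for layer_name, points in layers.items():
    for lat, lon in points:
        stacked[cell(lat, lon)].add(layer_name)

overlaps = {c: names for c, names in stacked.items() if len(names) > 1}
print(overlaps)
# cell (34.51, 69.18) holds events from three layers (set order may vary)
```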

To date, ABI has primarily been used in the kinds of operations that have defined Iraq and Afghanistan: manhunting and uncovering insurgent networks. But because ABI is more a methodology than a discipline, and because the products that enable ABI are customizable, the intelligence community sees ABI applied to a broad range of problems.

“The immediate question is, can we expand it beyond counterterrorism and manhunting and the fight against terror?” Treverton said.

He suggested applications such as maritime domain awareness, in which signatures exist for Chinese frigates but not junks.

ABI can theoretically be brought to bear on any problem that might be aided by a “pattern of life” analysis, a prominent phrase in the white papers. In finance, for example, ABI might identify patterns left by a particular kind of criminal.

“You could use this in the insurance industry to try and understand the patterns of life of individuals that steal things from you and make false claims. We do some of that work today,” Biltgen said.

While ABI can help anticipate patterns, advocates don’t claim it can predict future behavior.

“I wouldn’t call it predictive,” Treverton said. “I wouldn’t call anything predictive. That’s asking way too much.”

Still, it may help officials anticipate threats by building a deep understanding of the networks that give rise to specific incidents.

 

POTENTIAL ROADBLOCKS

Two things could hinder ABI — one technical, one cultural.

It sounds relatively uncomplicated to develop a visual network, say, by tracing all of the tire tracks captured by wide-area motion video in a given area over a period of time. Origins and destinations become nodes, and hundreds or even thousands of tire tracks describe a network from which analysts can extract meaning. But the devil is in the details. For example, it is difficult to define a “vehicle stop” in an algorithm, much less assign meaning to it. Does a “stop” last five seconds or one minute?
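To make the threshold question concrete, here is a minimal, hypothetical stop detector over a sequence of timestamped track points. The dwell-time and radius thresholds are explicit parameters precisely because, as the paragraph above notes, there is no obviously correct value for them.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def detect_stops(track, min_dwell_s=60, max_radius_m=15):
    """Find 'stops' in a track of (t_seconds, lat, lon) points.

    A stop is any run of points staying within max_radius_m of its first
    point for at least min_dwell_s. Whether a stop means 5 seconds or one
    minute is a policy choice, encoded here as min_dwell_s."""
    stops, i = [], 0
    while i < len(track):
        j = i
        while (j + 1 < len(track)
               and haversine_m(track[i][1], track[i][2],
                               track[j + 1][1], track[j + 1][2]) <= max_radius_m):
            j += 1
        if track[j][0] - track[i][0] >= min_dwell_s:
            stops.append((track[i][0], track[j][0], track[i][1], track[i][2]))
        i = j + 1
    return stops

# A vehicle idles near one point for ~90 seconds, then drives away.
track = [(0, 34.0500, 69.0500), (30, 34.05001, 69.05001),
         (90, 34.05002, 69.05000), (120, 34.0600, 69.0600)]
print(detect_stops(track))          # one stop lasting 90 s
print(detect_stops(track, 120))     # none: same data, stricter threshold
```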

“It sounds easy, until you touch the data. You realize that every proposition in that value chain has hidden complexity,” said Gary Condon, an intelligence expert at MIT’s Lincoln Lab, at GEOINT 2012.

The second set of issues is cultural. Even in the post-9/11 era, legal boundaries and security clearances can prevent the kind of data-sharing that makes ABI work. The quantity of publicly available information swells by the day, but the intelligence community still often prizes classified over open-source information. And just as complex: Some of that open-source intelligence raises privacy concerns when U.S. persons are involved.

That’s been at the heart of the outcry over the NSA’s Prism program and phone-record collection.

Still, top-level intelligence officials see ABI as a valuable new tool. Several senior officials from the Office of the Director of National Intelligence remarked on its growing importance at the U.S. Geospatial Intelligence Foundation forum in early May.

“The defense and intelligence worlds have undergone, and are still undergoing, a radical transformation since the events of 9/11. The Department of Defense and the Director of National Intelligence have made information sharing and efficiency priorities,” an ODNI spokesman said. “This will increase collaboration and coordination, which will have a multiplying effect on approaches such as ABI.”

 

 

Analysis: Policies and Opportunities That Will Shape Cybersecurity Spending

Special to Homeland Security Today

By: Stephanie Sullivan, immixGroup Inc.

07/08/2013 (11:16am)

http://www.hstoday.us/industry-news/general/single-article/analysis-policies-and-opportunities-that-will-shape-cybersecurity-spending/ec776ea67c3d7ae2d65377daccc49279.html


Editor’s Note: Homeland Security Today has partnered with immixGroup Inc. to bring you exclusive market insight and analysis.

In this installment, Stephanie Sullivan, Market Intelligence Consultant, offers a look at the major White House and Congressional efforts impacting cybersecurity programs throughout the federal government, as well as some of the main contracting opportunities on the cyber horizon.

————————————–


As cyber threats continue to dominate the headlines, it is important for the innovators in the government security market to understand how the legislative and executive branches are tackling cybersecurity and the potential ramifications of these efforts for industry.  


FY14 Legislation Impacts on Cyber


The following legislative and executive directives could impact the commercial-off-the-shelf (COTS) vendor community in FY14. They aim to encourage the adoption of cybersecurity best practices on a voluntary basis, and the underlying motivation is to spur industry and government collaboration on information sharing and defending networks.

  • The Cybersecurity Executive Order, issued on February 12, 2013, directs NIST to develop a voluntary Cybersecurity Framework for critical infrastructure. The framework proposes to allow intelligence gathering on cyber attacks and cyber threats, as well as to address network security gaps in critical components of U.S. infrastructure, including banking, utility, and transportation networks.

NIST, in collaboration with GSA, DOD and DHS, released a Request for Information (RFI) in February to gather feedback from industry and relevant stakeholders on the development of the framework, and has been holding a series of workshops to identify priority elements the framework must address.

An initial draft of the framework was publicly released on July 1, with revisions expected following the third Cybersecurity Framework Workshop, held July 10-12 in San Diego; the draft will be expanded and refined ahead of a fourth workshop anticipated in September. Additional framework milestones include the release of a preliminary version in October, with a final version expected in February 2014.

Keep an eye on this – participating in stakeholder engagements and familiarizing yourself with the draft guidelines will be critical to all COTS vendors, because you need to understand how your products and solutions can enhance the framework and meet these ‘voluntary’ but critical security needs. After all, the end goal of these working groups will be to eventually bake cybersecurity standards into federal acquisitions to ensure cyber protection.

  • Presidential Policy Directive 21 (PPD-21) on Critical Infrastructure Security and Resilience replaces and updates the 2003 Homeland Security Presidential Directive 7. It was also issued on February 12, 2013, as a complement to the Cybersecurity Executive Order. PPD-21 defines critical infrastructure and encourages the federal government to strengthen the security and resilience of its own critical infrastructure, as outlined in the directive’s three strategic goals. It also designates sector-specific agencies (SSAs) for critical infrastructure segments, and mandates information sharing and cooperation between the SSAs, state and local organizations, and international partners.


The new policy establishes “national critical infrastructure centers” in the physical and cyber domains designed to promote information sharing and collaboration, and it orders the State Department to work with DHS on issues of international interdependencies, multinational ownership and the global economy. However, some speculate that not enough has changed from the former directive to be truly noteworthy.

  • The Cyber Intelligence Sharing and Protection Act (CISPA) is a bill designed to encourage voluntary information sharing between private companies and the government about incoming cyber threats. In an ideal scenario, a private company such as Amazon or Google would identify unusual network activity that might suggest a cyber attack and alert the government; likewise, if the government detected a threat to a private business network, it would share its findings.


The bill was originally introduced in Congress last year, but privacy concerns proved to be a major roadblock and it never reached the Senate floor. The bill could meet the same fate this year, even though it was passed by the House of Representatives on April 18, 2013. Increased scrutiny of private information sharing following the NSA PRISM disclosures has halted movement on cybersecurity legislation until at least September, if not longer.


  • The FY14 National Defense Authorization Act (NDAA) also carries cybersecurity provisions. One provision of note calls for mandatory reporting by defense contractors when there has been a successful cyber penetration. Additionally, the NDAA calls for improved monitoring and alert technologies to detect and identify cybersecurity threats from both external sources and insider threats. The NDAA also contains a provision aimed at addressing longstanding concerns over elements of the Pentagon’s supply chain; it hints that statutory requirements to address this problem may be down the road, and DOD is encouraged to cooperate with industry.



FY14 Federal IT Sales Opportunities in Cyber


The federal government plans to spend about $13 billion on cybersecurity in FY14. This reflects the fact that cybersecurity continues to be a strategic concern for federal agencies. Just as important, cybersecurity will benefit from bipartisan reluctance to curb spending in this high-profile area. Fiscal constraints do exist, however, and agencies will have to be circumspect in how they earmark money. The following is a small selection of programs with significant cybersecurity requirements and large allocations for new starts. It is important to understand which programs have funding and map your solutions to these programs.



FY14 Opportunities: Civilian


Funded cybersecurity opportunities within the civilian arena can be found in almost every Executive Branch agency. Below are the top three civilian programs by Development, Modernization and Enhancement (DME) funding – money used to buy new products.

  • Department of Homeland Security (DHS) National Protection and Programs Directorate (NPPD) – The Continuous Diagnostics and Mitigation (CDM) program is the agency’s largest cybersecurity investment, dedicated to continuous monitoring, diagnosis, and mitigation activities to strengthen the security posture across the federal .gov domain. This investment will assist DHS in overseeing the procurement, operations and maintenance of sensors and dashboards deployed to federal agencies.
    • FY14 DME IT spend for CDM is $121.4 million
  • Department of Commerce, United States Patent and Trademark Office (USPTO) – The Network and Security Infrastructure investment covers the IT operations and services provided to the USPTO and external customers by the OCIO. Enhancements and upgrades of this IT infrastructure will include firewall enhancements, antivirus software, network security, data protection and compliance.
    • FY14 DME IT spend for NSI is $89.5 million
  • DHS (NPPD) – The National Cyber Security Division, through its National Cybersecurity Protection System (NCPS), operationally known as ‘Einstein,’ protects federal civilian departments’ and agencies’ IT infrastructure from cyber threats. Potential FY14 requirements for this program could include intrusion prevention, intrusion detection, and advanced cyber analytics.
    • FY14 DME IT spend for NCPS is $72 million


FY14 Opportunities: Defense


Generally speaking, cybersecurity opportunities within the Department of Defense can be found within major network and infrastructure programs. Below are the top three defense programs by Development, Modernization and Enhancement (DME) funding – money used to buy new products.

  • Warfighter Information Network-Tactical (WIN-T): A high-speed, high-capacity tactical communications network serving as the Army’s cornerstone tactical communications system through 2027. Developed as a secure network for video, data, and imagery linking mobile warfighters in the field with the Global Information Grid. Potential FY14 procurements include firewall enhancements, intrusion protection and detection, continuous monitoring, and encryption.
    • FY14 DME IT spend for WIN-T is $815.4 million
  • Next Generation Enterprise Network (NGEN): An enterprise network that will replace the largest intranet in the world, the Navy Marine Corps Intranet, providing secure, net-centric data and services to Navy and Marine Corps personnel. NGEN forms the foundation for the Department of the Navy’s future Naval Network Environment. HP was recently awarded the contract, potentially worth up to $3.5 billion. The entire gamut of information assurance requirements is at play here, particularly given the heavy reliance on cloud technology that NGEN will require.
    • FY14 DME IT spend for NGEN is $195.05 million
  • Consolidated Afloat Networks Enterprise Services (CANES):  Consolidates the Navy’s multiple afloat networks into one network. CANES replaces these existing networks with new infrastructure for applications, systems, and services and will improve interoperability along the way. The RFP is currently out with an award expected this winter.
    • FY14 DME IT spend for CANES is $195.1 million

 
 

About immixGroup Inc.

Founded in 1997, immixGroup® is a fast-growing company and a recognized leader in the public sector technology marketplace. immixGroup delivers a unique combination of services for software and hardware manufacturers, their channel partners, and government agencies at the federal, state, and local levels. immixGroup is headquartered in McLean, Virginia, close to Washington, DC and near the epicenter of the government IT community.

 

 

Darpa Refocuses Hypersonics Research On Tactical Missions

By Graham Warwick

Source: Aviation Week & Space Technology

http://www.aviationweek.com/Article.aspx?id=/article-xml/AW_07_08_2013_p24-593534.xml#

July 08, 2013

 

For the Pentagon’s advanced research agency, blazing a trail in hypersonics has proved problematic. Now a decade-long program to demonstrate technology for prompt global strike is being wound down, with some hard lessons learned but no flight-test successes.

In its place, the U.S. Defense Advanced Research Projects Agency (Darpa) plans to switch its focus to shorter, tactical ranges and launch a hypersonics “initiative” to include flight demonstrations of an air-breathing cruise missile and unpowered boost-glide weapon. If approved, the demos could be conducted jointly with the U.S. Air Force, which is eager to follow the success of its X-51A scramjet demonstrator with a high-speed strike weapon program.

Darpa’s original plan for its Integrated Hypersonics (IH) project was to begin with a third attempt to fly the Lockheed Martin Skunk Works-designed HTV-2 unmanned hypersonic glider, after the first two launches in 2010 and 2011 failed just minutes into their Mach 20 flights across the Pacific. This was to be followed by a more capable Hypersonic X-plane that would have pushed performance even further.

The original plan drew sharp criticism from Boeing executives, who viewed the proposed program as a thinly veiled excuse to fund a third flight of Lockheed’s dart-like HTV-2, which they consider unflyable. In laying out its revised program plan, Darpa makes no mention of any political lobbying against the HTV-2, but acknowledges a third flight would not make best use of its resources for hypersonic research.

Instead, as the Pentagon refocuses on China as a threat, Darpa is looking to work with the Air Force to demonstrate hypersonic weapons able to penetrate integrated air defenses and survive to strike targets swiftly, from a safe distance. Air-breathing and boost-glide weapons present challenges different from each other and from the HTV-2, but the agency believes the lessons learned so far will prove valuable.

Key take-aways from HTV-2, says Darpa program manager Peter Erbland, include that the U.S. “has got kind of lean” in hypersonics competency as investment has declined from the heady days of the X-30 National Aero-Space Plane, and that “we have to be careful assuming our existing design paradigms are adequate” when developing a new class of hypersonic vehicles.

The HTV-2 sprang some surprises on its two failed flights, first with aerodynamics, then with hot structures. Working out what happened “required us to mine all the competency in hypersonics that we have,” he says, and took a team assembled from government, the services, NASA, the Missile Defense Agency, industry and academia.

Erbland says the decision not to fly a third HTV-2 was influenced by “the substantial knowledge gained from the first two flights in the areas of greatest technical risk: the first flight in aerodynamics and flight performance; the second in the high-temperature load-bearing aeroshell.” Another factor was the technical value of a third flight relative to its cost. A third was the value of investing resources in HTV-2 versus other hypersonic demonstrations. “We’ve learned a lot; what is the value of other flights?” he asks.

While the Air Force Research Laboratory had two successes in four flights of the Mach 5, scramjet-powered Boeing X-51A, Darpa’s two HTV-2 flops followed three failures of the Mach 6, ramjet-powered Boeing HyFly missile demonstrator. But as is often the case in engineering, more is learned from failure than from success, and investigation of the HTV-2 incidents will result in more robust hypersonic design tools that increase the likelihood of future success, Erbland argues.

To ensure all lessons are absorbed, work on the HTV-2 will continue to early next summer “to capture technology lessons from the second flight, and improve design tools and methods for high-temperature composite aeroshells,” he says. Information from the post-flight investigation will be combined with additional ground testing to improve the models used to design load-bearing thermal structures—”how they heat up, the material properties, their uncertainties and variables, and how we use modeling and simulation to predict thermal stresses and responses.”

HTV-2 was intended to glide an extended distance at hypersonic speed—roughly 3,000 nm. in 20 min.—and required a slender vehicle with high lift-to-drag (L/D) ratio and a carbon-carbon structure to fly for a prolonged time at high temperatures. While Flight 1 in April 2010 failed when adverse yaw exceeded the vehicle’s control power, Flight 2 in August 2011 failed when the aeroshell began to degrade, causing aerodynamic upsets that ultimately triggered the flight-termination system.

“From the first flight it was clear our extrapolation of aero design methods was not adequate to predict behavior in flight,” says Erbland. “From the first to the second flights we redid the ground testing, and rebaselined the aero using new tools. On the second flight, the changes were completely effective, even in very adverse flight conditions.” But the modifications set up the HTV-2 for failure on the second flight.

“Changes to the trajectory made it a more severe aero-thermal environment than the first flight,” he says. “We have been able to reconstruct how it failed from the limited instrumentation, and the most probable cause is degradation of the structure. Thermal stresses led to failure.” While the vehicle retained its structural integrity, temperature gradients over small areas led to local material failures that caused the upsets.

“From the second flight, we learned a lesson on how to design refractory composites, to improve our understanding of how to model hot structures under thermal load,” says Erbland. “We learned a critical lesson about variability and uncertainty in material properties. That is why we are taking time to fund the remediation of our models to account for material and aero-thermal variability.”

HTV-2 is all that remains of the once-ambitious Falcon program (for Force Application and Launch from the Continental U.S.), started in 2003 with the goal of demonstrating technology for prompt global strike. Falcon had two elements, a hypersonic cruise vehicle (HCV) and a small launch vehicle (SLV) needed to boost the cruiser into a hypersonic glide. The SLV effort helped fund Space Exploration Technologies’ Falcon 1 booster, but the HCV went through several changes.

The original HTV-1 hypersonic test vehicle was abandoned in 2006 when the sharp-edged carbon-carbon aeroshell proved impossible to manufacture. Darpa and Lockheed proceeded with the easier-to-produce HTV-2, but then departed from the original unpowered HCV concept to propose an HTV-3X testbed, with turbojet/scramjet combined-cycle propulsion. Congress refused to fund the vehicle, dubbed Blackswift, and it was cancelled in 2008, leaving two HTV-2s as the remnants of Falcon.

Now Darpa is seeking to reinvent its hypersonics focus by moving away from the global- to the tactical-range mission. But while an air-breathing weapon can draw directly on the X-51, boost-glide over a 600-nm range calls for a different vehicle from the HTV-2. “To get the performance we need to look at high L/D with robust controllability. Thermal management is a different problem to HTV-2. We need robust energy management. And affordability.”

Boost-glide challenges include packaging a weapon for air and surface launch. “The mass and volume constraints are different. We had a very high fineness ratio for global strike; we will have to be very innovative to get high L/D without a high fineness ratio,” says Erbland. On the other hand, “trajectory insertion velocities are lower, and the booster problem could be more tractable. The problem with global range is that orbital launch systems with the energy needed are not designed to put a vehicle on an ideal start of glide, so we have to make them fly in ways they don’t want to,” he says.

But Darpa believes its HTV-2 experience will prove useful. “It provided critical technical knowledge to enable us to design a future boost-glide vehicle capable of prompt global strike. We made huge progress in understanding what we need to do in ground-test and flight-test to design the aerodynamics and hot structure,” Erbland says. “These are lessons we would not have learned without flight test, because of the limitations with ground test. We know going forward how to use modeling and simulation and ground test to give us more confidence that we can design a successful system.”

 

The State Of Broadband

Only by keeping pace with the latest in regulations, competition, and technology will companies rise above low-capacity, high-priced telecom networks.

By Jonathan Feldman, InformationWeek

July 10, 2013

URL: http://www.informationweek.com/infrastructure/management/the-state-of-broadband/231901478

 

We all remember the bad old days of having to load data into removable media in order to send it off to the data center. After all, it would have taken days to transmit the necessary data over slow telecom links.

 

Problem is, the bad old days aren’t over. Instead of shipping tapes to data centers, organizations now regularly ship entire hard drives to cloud providers. Despite tremendous advances in line speeds, it still can take a week or more to transmit very large data sets, even if your line speed is 10 Mbps. Of course, companies don’t regularly need to transfer terabytes of data over the internet, but the current level of sneakernet that’s necessary for the transfer of even a few hundred gigabytes seems a bit high for the 21st century.
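
To put that claim in rough numbers, here is a minimal sketch (our illustration, with assumed data-set sizes) of transfer time as a function of line speed:

# Rough transfer-time estimate for bulk data over a WAN link.
# Real transfers are slower still because of protocol overhead,
# congestion and disk throughput; figures here are illustrative.

def transfer_days(data_terabytes: float, line_mbps: float) -> float:
    bits = data_terabytes * 8e12            # decimal TB -> bits
    seconds = bits / (line_mbps * 1e6)      # line rate in bits per second
    return seconds / 86400.0

print(f"1 TB at 10 Mbps:   {transfer_days(1.0, 10):.1f} days")   # ~9.3 days
print(f"300 GB at 10 Mbps: {transfer_days(0.3, 10):.1f} days")   # ~2.8 days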

The state of broadband matters to your organization. There’s been considerable consumer interest over the past several years, culminating in an FCC plan announced earlier this year to expand broadband coverage and speeds and promote competition. IT organizations can benefit by staying in touch with those regulatory issues, as well as taking advantage of new technology trends, such as wireless broadband, and partnering with alternative providers and municipal networks that buck the status quo. There are clearly risks in doing so, but taking no action almost guarantees that enterprise IT, with pockets of presence in rural and other nonurban areas, will continue to be held back by low-capacity, high-expense networks.

There are many reasons why the state of consumer broadband should matter to enterprise customers.

 

Problem With The Status Quo

In June, National Cable and Telecommunications Association CEO Kyle McSlarrow called America’s broadband deployment over the last 10 years “an unparalleled success story,” alluding to the rise of cable IP networks and faster and more extensive broadband in the consumer market. He’s right by some measures. Among the G7 countries, even though the U.S. is only No. 5 in broadband penetration, it’s been making headway. But when you look at average broadband prices worldwide, the U.S. doesn’t compare favorably: service in the United Kingdom, Sweden, France, Japan, Korea, Germany, and many other industrialized countries is cheaper, on average. And when you look at broadband subscribers per 100 inhabitants, the U.S. is ranked No. 22, slightly above the Organisation for Economic Co-operation and Development average but below the Scandinavian countries, Korea, Canada, France, the U.K., and others.

As with many things, where you stand depends upon where you sit. Tony Patti, CIO for S. Walter Packaging, a century-old manufacturing company in Philadelphia, says that even in the SOHO market, significant bandwidth is for sale relatively cheaply. “People always want more for less, but we’re at a remarkable and revolutionary time in the history of the convergence of computing and communications,” Patti says. But the two key questions are these: Are you in the provider’s service area, and if you are, does the actual speed match the advertised speed? In major markets, the answer is: probably. But talk to someone in smaller cities and rural America, and a different story emerges.

Kris Hoce, CEO of Pardee Hospital, a 200-bed facility in Hendersonville, N.C., says the hospital’s telecom lines are “stretched” today, and when the management team looks at tomorrow’s challenges, including telemedicine and telemetry, he gets even more concerned.

Until a second competitor, Morris Broadband, entered the market a year ago, the incumbent provider was Pardee’s only option. “You’ll take whatever capacity they give you, do it on their time schedule, and you’ll pay through the nose for it,” Hoce says. Since Morris Broadband’s entry, Pardee has realized a 10% to 15% reduction in telecom costs, though it can’t always get sufficient bandwidth, he says.

 

National Broadband Plan

The FCC’s 376-page National Broadband Plan, while a testament to the ability of federal bureaucracy to fill large amounts of paper, stands to benefit enterprise IT over the next few years in several areas, if the agency follows through.

First, the FCC says that it will be publishing market information on broadband pricing and competition. Will this be as useful as PriceWatch and eBay are in determining what you should pay? We’re not sure. But transparency itself should help: A market where all players know what everybody’s charging tends to be one where prices dip as low as possible.

Second, the FCC says it will make additional wireless spectrum available, and it will update its rules for backhaul spectrum. President Obama has thrown his weight behind this movement, directing the National Telecommunications and Information Administration–the folks behind the broadband stimulus–to help the FCC with a plan to make 500 MHz of spectrum available by the fourth quarter of this year.

It’s unclear what the licensing procedures will be, and for which portion of the additional spectrum. Our bet: some mix of unlicensed spectrum (like 2.4 GHz, a nightmare for IT departments that want to avoid interference), some fully licensed (like 800 MHz, whose paperwork can take months or years to get processed), and some “lightly licensed” (like the 3,650-MHz band that was allocated for WiMax in 2005, which requires two or more licensees in the same region to cooperate). When additional spectrum comes online, it should revitalize the market and create product innovations, which should make broadband wireless a bit less difficult for enterprises to deploy.

The FCC also plans to improve rights-of-way procedures. Power companies and other pole owners often have either undocumented or onerous agreements for anyone wanting to attach to a pole or bridge. Streamlining and standardizing this process would be welcome news to telecom market entrants and user organizations that want to bypass the telecom providers. The unanswered question is, how will the FCC “encourage” rights-of-way owners to improve these procedures?

The National Broadband Plan also stipulates longer-term (within the next decade) goals, including that 100 million consumers be able to access affordable 100-Mbps actual download speeds and 50-Mbps uploads, more than 10 times faster than what most U.S. consumers can now get. More interesting to enterprise IT, the plan outlines a goal of affordable access to 1-Gbps links for “anchor institutions” such as hospitals, community centers, and schools. As these institutions get affordable links, other large institutions, like big companies, will also get affordable high-speed links.

The FCC doesn’t always have the authority to say how these goals will be accomplished. But in the “implementation” chapter of the National Broadband Plan, it suggests who (including the FCC) should pursue them. For example, it recommends that the executive branch create a “broadband strategy council” consisting of advisers from the White House and its Office of Management and Budget, NTIA, FCC, and other agencies. The FCC also has committed to publishing an evaluation of its progress as part of its annual 706 report, named after section 706 of the Telecommunications Act of 1996. You can track 706 reports at http://www.fcc.gov/broadband/706.html.

 

Emerging Competition

Simplifying and streamlining the status quo won’t be as quick as we want it to be, but the situation isn’t bleak.

True, many of the wireline highways are owned by the same folks that own the off-ramps and have a big interest in resisting competition (the likes of AT&T, Verizon, and Qwest from the telco sector and Comcast, Time-Warner, and Cablevision from cable TV). But competition is in fact emerging.

 

Players like Morris Broadband serve relatively small and rural areas, catering to customers the larger players simply won’t touch. CenturyLink, a larger player, serves rural customers in 33 states. PAETEC competes in 84 of the top 100 areas, known as “metropolitan service areas,” which are anything but rural. Then there are municipal broadband projects such as LUS Fiber, a fiber-to-the-home network started by the utility in Lafayette, La., that offers business services (10-Mbps symmetric) starting at $65 a month.

It’s hard to get information out of the incumbents–we tried, but folks like Verizon said that they don’t see how consumer broadband is related to serving enterprise customers. But the conventional wisdom is that they won’t serve an area unless they can get 25 potential customers per mile. Smaller players will look at areas with five or 10 potential customers per mile. Bottom line: Whenever competitors enter a market, prices fall. In a striking irony, the incumbents opposed to broadband regulation have lobbied local and state authorities to prevent broadband buildouts by municipal entities.

In addition to the wireline broadband alternatives, consider that the airwaves are wide open. Wireless ISPs like Clear and mobile phone and 3G data providers like T-Mobile and Verizon Wireless are interesting, but your bandwidth and reliability may vary when attempting to use their business-class SOHO service. That said, back in the day of the bag phone, nobody would rely on a cell phone for anything that was hugely important, but that didn’t keep IT organizations from playing with them in noncritical areas.

We’re also interested by the services offered by the likes of Texas-based ERF Wireless, which is completely focused on serving businesses, mainly banking and oil companies. ERF’s model: Customers invest in their own wireless infrastructure to backhaul to ERF’s network and then pay an ongoing port fee to access a secured backbone. CEO Dean Cubley says ERF’s banking customers pay about half of what they were paying to incumbent providers and have about a three-year payback on their capital investment.

Jacobson of North Carolina not-for-profit NCREN says the group’s successful BTOP round 1 application (awarded $28.2 million) came from efforts by the state’s office of economic recovery. It’s going to trickle up to the hospitals, too. “All the medical schools in the state are on NCREN today,” he says, and “the nonprofit hospitals will be eligible to interconnect to us as well.”

 

Welcome Back To Sneakerville

Some caution is necessary. There will be no shortage of poorly conceived broadband initiatives. Savvy IT organizations will stay close to operations, leaving the speculation to investors and economic development types.

Moving beyond sneakernet will require more than just fatter pipes. “Civil engineers discovered some time ago that building more lanes on highways does not really relieve traffic problems,” says Mark Butler, director of product marketing with Internet services company Internap. “Relief comes when you use the available capacity in a more efficient manner.”

So as you keep track of the legislation and other craziness coming out of Washington, keep pace with technical realities, lest you invest in higher-speed lines only to find that your use case isn’t quite as you had planned. George Bonser, a network operator with mobile messaging provider Seven, cites cases of companies that install high-speed lines and then discover they can’t get anywhere near their theoretical limit because of the software in use. It’s a complicated matter that deserves your attention in the same way that keeping track of broadband competition, accessibility, and fairness does.
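
One concrete example of that software limit, assuming a plain single-stream TCP transfer: throughput is capped by window size divided by round-trip time, no matter how fast the line is. A minimal sketch of the arithmetic (our illustration, not from the article):

# Single-stream TCP throughput is bounded by window_size / round_trip_time,
# regardless of the line rate. Values below are illustrative.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

# A 64 KB window over an 80 ms cross-country path:
print(f"{max_tcp_throughput_mbps(64 * 1024, 80):.1f} Mbps")    # ~6.6 Mbps

# Even a 1 Gbps line needs a larger (scaled) window before it can be filled:
print(f"{max_tcp_throughput_mbps(1024 * 1024, 80):.1f} Mbps")  # ~104.9 Mbps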

 

 

 

 


NIST seeks input on cybersecurity framework

Upcoming Cybersecurity Framework workshop this week aims for feedback from private sector on practices that can reduce the risk of cyber attacks

From: http://www.csoonline.com

Cynthia Brumfield, CSO

http://www.csoonline.com/article/print/736080

July 09, 2013

Starting tomorrow, July 10th, in San Diego, the National Institute of Standards and Technology (NIST) will host the third, and perhaps most important, in a series of workshops aimed at developing a voluntary comprehensive cybersecurity framework that will apply across sixteen critical infrastructure sectors.

Mandated by an Executive Order (EO) issued by President Obama on February 12, 2013, the NIST-developed framework represents the first time the federal government has sought to prescribe a wide-ranging approach to protecting critical cyber assets, a tough task that Department of Homeland Security (DHS) Secretary Janet Napolitano has characterized as an “experiment.” The framework must be completed in preliminary form by October and finalized by February 2014.

During the San Diego workshop, NIST will for the first time delve into details of the emerging framework, which is based on two earlier workshops as well as formal comments NIST received in response to a public notice. To speed things along ahead of the workshop, NIST has issued three reference materials — a draft outline of what the framework might look like, a draft framework “core” that focuses on key organizational functions and a draft compendium that features existing references, guidelines, standards and practices.

Based on the recommendations of industry commenters, NIST has placed a large emphasis in the draft framework on reaching the very senior levels of management, including CEOs and boards of directors. Top “officials are best positioned to define and express accountability and responsibility, and to combine threat and vulnerability information with the potential impact to business needs and operational capabilities,” NIST states in the draft outline.

This focus on top executives has, not surprisingly, been praised by industry participants.

“Cybersecurity is just not a technological problem,” said Jack Whitsitt, principal analyst at EnergySec, an energy industry cybersecurity consortium. “This is a business management, business maturity problem. People build what you tell them to build, people build what you fund them to build. Unless we do a better job at the business side of cybersecurity, the problems won’t go away.”

Many cybersecurity experts say that reaching that top level of management is one of the biggest challenges to ensuring adequate cybersecurity protection of critical assets. CEOs, they say, typically engage in “cybersecurity theater,” implementing hollow programs that only pay lip service to the issues.

“The reality is that most of the CEO’s are relying on their trade organizations to ‘fix the problem’ for them,” one top cybersecurity consultant said. “And the trade organizations are one of the loudest voices in the echo chamber convincing themselves that this is all just a bunch of low-probability hype and a stepping stone to more regulation.”

 

Another challenge, at least so far as a federal framework is concerned, is the division of responsibilities among government agencies as spelled out in the EO and accompanying Presidential Policy Directive (PPD). For example, DHS has been assigned a number of tasks under the EO that seem to relate to the framework, such as defining what constitutes critical infrastructure.

Some asset owners have suggested that there are too many moving parts in the overall cybersecurity landscape and have noted rising tensions between NIST, an arm of the Commerce Department, and DHS.

 

“NIST and DHS aren’t doing a good job in deciding how this is going to work,” one expert noted.

 

But one senior government official overseeing the process said that many cybersecurity efforts in the EO and PPD just aren’t relevant to how the framework gets developed.

 

“The framework is supposed to work for the widest range of industries” and therefore it doesn’t matter how critical infrastructure gets defined, for example.

 

“DHS is making the decision that has no bearing on this framework,” he said, adding that it is likely that the list of critical infrastructure assets won’t be made public anyway.

 

Yet another challenge is the degree to which the framework process is being shaped by technology vendors and consultants, who far outnumber asset owners in the workshop meetings held to date. Although NIST wants to bake cybersecurity into vendor-supplied technology, thereby ensuring that even small organizations that lack the resources to pay cybersecurity specialists are guaranteed basic protection, some asset owners balk at being force-fed technology that may better fit vendor agendas than their own safety. One telecom cybersecurity specialist said he wished that NIST would separate asset owners from vendors and consultants in the workshop sessions.

 

Despite these challenges, most of the participants in the process believe that NIST is on track and that the draft framework materials released for the July workshop meet expectations. However, the real action will take place at the workshop itself, as NIST goes into greater detail on where it is headed with the framework.

 

With only about three months left to meet the October deadline, most of the key players are taking a wait-and-see attitude, hoping to gain a better sense of the situation after the workshop in San Diego. As one telecom industry representative said, “We have to see whether this whole process has the result we’re looking for, which is to improve our cybersecurity posture, and not some feel-good government exercise.”

 

Cynthia Brumfield, President of DCT Associates, is a veteran communications industry and technology analyst. She is currently leading a variety of research, analysis, consulting and publishing initiatives, with a particular focus on cybersecurity issues in the energy and telecom arenas.

 

 

North Dakota company specializes in aerial crop imagery

UASNews

by Press • 9 July 2013

By: Jonathan Knutson

 

GRAND FORKS, N.D. — When David Dvorak launched Field of View in 2010, he foresaw a bright future for aerial crop imagery. Today, after working with farmers, agronomists and even a South American plantation manager, he’s more optimistic than ever.

“A few years ago, there was some behind-the-scenes interest in this,” says Dvorak, CEO of Grand Forks, N.D.-based Field of View.

Now, “I’m quietly confident there’s this perfect storm brewing where the precision agriculture market really takes off and the civil UAS (unmanned aircraft system) market takes off. They’re both on a trajectory to make that happen about the same time,” he says.

Field of View’s mission is to “bridge the gap between unmanned aircraft and precision agriculture,” according to the company’s website.

Its flagship product, GeoSnap, is an add-on device for multispectral cameras mounted on either manned or unmanned aircraft. Such cameras capture images in the red, green and near-infrared bands, allowing users to visualize plant stress better than they can with most other camera systems, Dvorak says.

GeoSnap takes images captured by the multispectral camera and maps them with real-world coordinates, a process known as georeferencing. That allows users to know the aerial images’ exact location on the ground.

“It’s a very complex process. We developed a product that hopefully makes the process easier,” Dvorak says.
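
In its simplest form, georeferencing means attaching the aircraft's GPS position at the moment of exposure to each frame and projecting pixels onto the ground. A toy sketch of the nadir-looking, flat-terrain case (our simplification; the article does not describe GeoSnap's actual processing, and all parameter values are assumptions):

import math

# Toy georeferencing for a nadir (straight-down) image over flat terrain.
# Real systems also correct for aircraft roll/pitch/yaw and terrain relief.
# All names and numbers here are illustrative assumptions.

def ground_footprint(alt_m, hfov_deg, vfov_deg):
    """Ground area (width_m, height_m) covered by one frame."""
    width_m = 2 * alt_m * math.tan(math.radians(hfov_deg) / 2)
    height_m = 2 * alt_m * math.tan(math.radians(vfov_deg) / 2)
    return width_m, height_m

def pixel_to_latlon(px, py, img_w, img_h, lat_deg, lon_deg, alt_m, hfov_deg, vfov_deg):
    """Approximate ground lat/lon of an image pixel, camera pointing straight down."""
    width_m, height_m = ground_footprint(alt_m, hfov_deg, vfov_deg)
    dx = (px / img_w - 0.5) * width_m      # metres east of the image centre
    dy = (0.5 - py / img_h) * height_m     # metres north of the image centre
    dlat = dy / 111320.0                   # metres -> degrees of latitude
    dlon = dx / (111320.0 * math.cos(math.radians(lat_deg)))
    return lat_deg + dlat, lon_deg + dlon

# Example: top-left pixel of a frame shot at 120 m over Grand Forks, N.D.
print(pixel_to_latlon(0, 0, 1280, 1024, 47.925, -97.032, 120, 48.0, 37.0))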

GeoSnap costs about $5,000 per unit, with the multispectral cameras costing about $4,000 each.

Field of View only recently began selling the add-on devices. So far, the company has sold a half-dozen, including one to NASA.

Dvorak thinks NASA will use the GeoSnap to learn more about vegetative cover on Earth, though he isn’t sure of specifics.

GeoSnap generally has drawn more interest overseas because other countries have fewer restrictions on air space, he says.

http://www.prairiebizmag.com/event/article/id/15187/

 

Hagel warns senators of 2014 budget dangers

FCW.com

By Amber Corrin

Jul 10, 2013

In a July 10 letter to lawmakers on the Senate Armed Services Committee, Defense Secretary Chuck Hagel warned of potentially dire threats to national security if Congress fails to reverse steep budget cuts for the 2014 fiscal year.

Hagel advised lawmakers that a potential $52 billion budget cut for fiscal 2014, which would be mandated under sequester spending caps imposed by the 2011 Budget Control Act, would continue to erode military readiness and weaken national defenses.

“I strongly oppose cuts of that magnitude because, if they remain in place for FY 2014 and beyond, the size, readiness and technological superiority of our military will be reduced, placing at much greater risk the country’s ability to meet our current national security commitments,” Hagel wrote to Sens. Carl Levin and James Inhofe, the committee’s chairman and ranking member, respectively. “This outcome is unacceptable as it would limit the country’s options in the event of a major new national security contingency.”

The secretary warned that “draconian actions” would be necessary to meet the budget-cut requirements. His comments stem from findings in the Strategic Choices and Management Review he directed earlier this year.

Such moves could include ongoing hiring freezes and layoffs as Defense Department officials seek to avert a second year of furloughs. Cutbacks in training and readiness could continue, and investments in areas such as research and development would also decline. DOD’s sustained efforts in acquisition reform additionally would take a hit, he said.

“The department hopes to avoid a second year of furloughs of civilian personnel, but DOD will have to consider involuntary reductions in force to reduce civilian personnel costs,” Hagel wrote. “The resulting slowdown in modernization would reduce our long-term, critically important and historic technological superiority and undermine our better buying power initiatives.”

Hagel called on members of Congress to cooperate with the Pentagon, the White House and each other to help mitigate what he deemed to be serious adverse consequences. He urged congressional support for controversial measures proposed by President Barack Obama in his 2014 budget, including slowed growth in military pay raises, increased TRICARE fees and the retirement or cancelation of lower-priority weapons programs.

Hagel also asked Congress to eliminate restrictions on military drawdown timelines and on the firing practices used to remove poor-performing civilian personnel, and reiterated his push for another round of the Base Realignment and Closure Act.

Training and modernization remain poised to take the biggest hits in the 10 percent across-the-board cuts. Cutbacks in training programs already in place under this year’s sequestration would have to continue or be accelerated, putting troops and citizens at greater risk, Hagel wrote. New programs would be hard-hit as well.

“DOD would be forced to sharply reduce funding for procurement, [research, development, testing and evaluation] and military construction. Indeed, cuts of 15 percent to 20 percent might well be necessary,” Hagel said. “Marked cuts in investment funding, especially if they continue for several years, would slow future technology improvements and may erode the technological superiority enjoyed by U.S. forces.”

He also warned that cuts would spill over into private industry as purchases and acquisition plans would be interrupted and costs increased.

“Defense industry jobs would be lost and, as prime contractors pull back and work to protect their internal work forces, small businesses may experience disproportionately large job losses,” Hagel wrote.

 

Sequestration Would Force Civilian Personnel Cuts in 2014, Hagel Says

http://www.govexec.com/defense/2013/07/sequestration-would-force-civilian-personnel-cuts-2014-hagel-says/66482/

By Eric Katz

July 11, 2013

The Defense Department is considering civilian reductions in force in fiscal 2014 to match reduced budget levels required by sequestration.

In a letter to the Senate Armed Services Committee, Defense Secretary Chuck Hagel said that while he is “fully committed” to enacting President Obama’s budget, he is also preparing a “contingency plan” in case sequestration remains in effect.

“DoD is hoping to avoid furloughs of civilian personnel in fiscal year 2014,” Hagel wrote, “but the department might have to consider mandatory reductions in force.”

Hagel added that RIFs do not offer much in the way of immediate savings, but would help the department reach future budget caps. The Pentagon would have to slash $52 billion from its budget next year if Congress fails to strike a deal to end sequestration.

“While painful,” Hagel wrote, “RIFs would permit DoD to make targeted cuts in civilian personnel levels rather than the more across-the-board cuts associated with furloughs.”

Military personnel would fare better, as their funding cuts would be “disproportionately small” due to separation costs. If Congress moves forward with its plan to raise military pay 1.8 percent — rather than the 1 percent Obama called for — implementing sequester cuts would be even more difficult, Hagel said.

The Defense Department could severely trim military personnel, but it would require halting accessions, ending permanent-change-of-station moves, stopping discretionary bonuses and freezing promotions. As the Pentagon has repeatedly emphasized, continued cuts would also negatively affect maintenance, modernization and readiness.

“In sum,” Hagel said, “the abrupt, deep cuts caused by the [2011 Budget Control Act] caps in FY 2014 will force DoD to make non-strategic changes. If the cuts continue, the department will have to make sharp cuts with far reaching consequences, including limiting combat power, reducing readiness and undermining the national security interests of the United States.” 

 

What I learned from researching almost every single smart watch that has been rumored or announced

Quartz

By Christopher Mims

July 11, 2013

http://qz.com/102646

Smart watches! I sure hope you like them, because literally everyone is developing one. And yet, given the vanishingly small proportion of watches that are “smart,” clearly, something is holding them back. Here are the trends that jumped out when I undertook a more or less comprehensive catalog of the forthcoming wrist-top wearables.

Smart watches are going to be big. As in physically large.

I hope you have man hands, because the average smart watch is going to have a 1.5″ display and look like one of those oversize G-shock watches that are favored by IT support guys and gym coaches. Some smart watches are actually just smartphones with a wrist band, and therefore truly gigantic.

Insufficient battery life is killing the smart watch dream.


This chart is old, but it illustrates a trend that continues to this day. (I asked the man who created it for an update, and he says none exists.) The bottom line: Moore’s law does not apply to batteries. That is, every year, we get more processing power per watt of electricity we put into a microprocessor, but battery technology is not proceeding at the same pace.
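
To see why the gap matters, compare compound growth over a decade using assumed, round-number improvement rates (roughly 50 percent a year for compute efficiency versus mid-single digits for battery energy density; both figures are our illustrative assumptions, not taken from the chart):

# Illustrative compound-growth comparison over ten years.
# Growth rates below are assumed round numbers, not measured data.
years = 10
compute_per_watt_growth = 1.5     # ~50% better performance per watt each year (assumption)
battery_density_growth = 1.07     # ~7% better battery energy density each year (assumption)

print(f"Compute efficiency: ~{compute_per_watt_growth ** years:.0f}x in {years} years")   # ~58x
print(f"Battery capacity:   ~{battery_density_growth ** years:.1f}x in {years} years")    # ~2.0x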

That’s a problem for a device that needs to be connected to a smartphone (via Bluetooth) and/or a cell phone network. Those radios will kill your battery. (Incidentally, turning them off is the single best way to preserve the battery life of your smartphone.) And the color, back-lit, 1.5″ LCD display mentioned above? It’s not doing your smart watch battery any favors, either.

The result of all this is smart watches with only three to four days of battery life, and that figure is likely to shrink significantly as developers find new ways to make smart watches useful (and therefore force them to use their radios and update their displays more often).

Some manufacturers are talking about adding things like inductive (i.e. wireless) charging to their smart watches. That will add bulk, but dropping your watch on a charging pad every night might be way less annoying than remembering to plug it in alongside your smartphone.

Smart watches are going to come with a variety of intriguing display technologies not seen elsewhere.

Nothing begets creativity like constraints, and given the battery issues outlined above, some makers of smart watches are turning to, or have already resorted to, display technologies that require less power than traditional LCD displays.

Qualcomm’s rumored smart watch, for example, supposedly uses Mirasol, a kind of reflective, full-color display that requires no power unless it’s being updated. (Mirasol displays color by refracting light like a butterfly’s wings, rather than emitting actual red, green and blue light, like an LCD.) The Pebble smart watch uses an e-paper display like that found in the Kindle and many other e-readers. And the Agent smart watch, which just raised a million dollars on Kickstarter, uses a black and white “memory LCD” produced by Sharp, which unveiled the technology in 2008 and has been trying to find a suitable mass-market use for it ever since.

All of the non-LCD displays represent a compromise of some kind, when compared to the bright, extra-sharp LCD displays we’ve become accustomed to on our smartphones. This will make smart watches less a “second screen” than a place to push updates like Facebook alerts and text messages. If that sounds less useful than, say, a little smartphone, well that’s one more reason smart watches have yet to take off.

Smart watches could be the next netbooks—in other words, a huge flop.

Samsung, Apple, Google, Microsoft, LG, Qualcomm, Sony—they’re all pouring money into smart watches, but so far every indication is that the devices they’re working on are at best their take on the existing smart watch concept, which frankly isn’t all that compelling. But every consumer electronics manufacturer is looking for the next iPhone or tablet, anything to stop the red ink in their PC divisions.

Or smart watches could allow for the kind of unobtrusive, always-on computing that is the promise of Google Glass.

 


The same constraints that are forcing smart watch designers to get creative with their displays are also forcing them to come up with something better for these things to do than save you the three seconds it takes to get your phone out and read a text message. For example, the wrist is a logical place to put the kind of RFID chips that allow “digital wallets”—just touch your watch to the payment pad, and you’re done. Or maybe your watch helps you not to forget your keys, wallet and anything else that’s critical, as you run out the door. Or even, maybe your smart watch makes it less likely you’ll be shot with your own gun. The possibilities are endless, and that’s probably what keeps backers coming back to smart watch projects on Kickstarter. Whether or not the mega-corporations rolling them out will find ways to answer these needs with their mass market products remains to be seen.

Demand for laptops is so weak that analysts have declared all of 2013 a “write-off”

Quartz

By Christopher Mims @mims

July 10, 2013

http://qz.com/102435

Demand for laptop computers is so weak in the first half of 2013 that the analysts at IHS iSuppli have declared it virtually impossible that the overall market for laptop and desktop PCs will grow in 2013 over 2012. It’s the same death-of-the-PC story we’ve heard before, only now the infection has spread to laptops as well. The numbers:

  • 6.9% drop in laptop shipments between the first and second quarters of 2013. That’s nearly twice the 3.7% drop seen in 2002 after the dot-com bust.
  • Compare that to a 41.7% increase in laptop shipments from Q1 to Q2 of 2010. Typically, the second quarter of the year sees a sharp uptick in purchases of notebook computers, a bounce-back after soft demand in the beginning of the year.
  • 2013 will be the second year in a row in which PC shipments shrank overall. Unless a miracle happens in the second half of 2013, the PC industry is going to have to face the fact that its decade of expansion, from 2001 to 2011, is over.

The culprit in all of this? “Media tablets,” says iSuppli. And those are only becoming more versatile at the high end, more affordable at the low end, and more popular overall. Given those trends, could 2014 be the third year in a row that PC sales decline? It would be unprecedented, but manufacturers can’t rule it out.

 

 

Report: Use of coal to generate power rises

Miami Herald

By NEELA BANERJEE
Tribune Washington Bureau

Posted on Wed, Jul. 10, 2013

http://www.miamiherald.com/2013/07/10/v-print/3494445/report-use-of-coal-to-generate.html#storylink=cpy

Power plants in the United States are burning coal more often to generate electricity, reversing the growing use of natural gas and threatening to increase domestic emissions of greenhouse gases after a period of decline, according to a federal report.

Coal’s share of total domestic power generation in the first four months of 2013 averaged 39.5 percent, compared with 35.4 percent during the same period last year, according to the Energy Information Administration, the analytical branch of the Energy Department.

By contrast, natural gas generation averaged about 25.8 percent this year, compared with 29.5 percent a year earlier, the agency said in its most recent “Short-Term Energy Outlook.”

With coal prices dropping and gas prices rising, the agency said it expected the use of coal to remain on the upswing, accounting for 40.1 percent of electricity generation through 2014. Natural gas would fuel about 27.3 percent.

Power plants are the single largest source of greenhouse gases that drive climate change. The growing use of coal is occurring against the backdrop of President Barack Obama’s announcement of a sweeping plan to reduce greenhouse gases, including curtailing emissions from power plants. His initiative has already sparked opposition from the coal industry, congressional Republicans and coal-state politicians.

Opponents say new regulations are unnecessary in part because utilities have relied more on natural gas, which emits less heat-trapping carbon dioxide than coal does. But the new data indicate that power plants will readily return to coal if the price of natural gas gets too high.

“Markets on their own may go in your direction for a period of time, but to ensure that we get reductions in greenhouse gas emissions in a significant, sustained way, you’re going to need government intervention,” said James Bradbury of the World Resources Institute, a Washington think tank.

The energy administration estimated that carbon dioxide emissions from fossil fuels would rise by 2.4 percent in 2013 and 0.6 percent in 2014, after falling about 3.9 percent in 2012.

“The increase in emissions over the forecast period primarily reflects the projected increase in coal use for electricity generation, especially in 2013 as it rebounds from the 2012 decline,” the report said.

In a speech last month, Obama directed the Environmental Protection Agency to propose rules by June 2014 to cut greenhouse gas emissions from power plants. A rule for new power plants is expected by September.

Coal-fired generation accounted for about 50 percent of the electricity produced in the U.S. about a decade ago. But a natural gas boom driven by hydraulic fracturing has pushed down prices, making natural gas more competitive with coal. By April of last year, coal and natural gas each produced about one-third of the country’s power.

Lower demand for coal drove down its average price, said Elias Johnson, a coal industry expert for the agency. At the same time, the price of natural gas ticked upward, buoyed by demand and somewhat reduced production.

Utilities, many of which have natural gas and coal plants, will probably toggle between the two fuels in the near term, burning the cheaper one more often.

“What is the least expensive form of generation gets dispatched first: renewables, hydro, then maybe nuclear and then coal or natural gas,” said Karen Obenshain of the Edison Electric Institute, a utility trade group in Washington.

Coal is not expected to grab a 50 percent share of power generation again because new regulations curtailing mercury emissions from power plants will probably shutter many small, older coal plants, said Mark McCullough of American Electric Power, one of the country’s largest coal-fired utilities. Even with such closures, the U.S. will probably fail to sharply reduce greenhouse gas emissions by 2020, a goal set by Obama in 2009, without a comprehensive effort to address carbon dioxide pollution.

Said Bradbury, “Electricity markets are very dynamic, and while there’s been a lot of press about the success story of the benefits of natural gas, it’s important to realize that that’s temporary and it depends on gas prices staying really low, and we’re starting to see there are these thresholds where utilities will switch back to higher-carbon fuel, like coal.”

 

Does Wearable Tech Have A Place In The Enterprise?

Posted by Dan Swinhoe

on July 04 2013

http://www.idgconnect.com/blog-abstract/2544/does-wearable-tech-have-a-place-in-the-enterprise

This week saw the first Pebble smartwatches selling online. Sony, Acer, Google, Apple, Foxconn and Samsung are all working on their own versions. The era of wearable tech is within sight.

According to Juniper Research, almost 70 million smart wearable devices will be sold in 2017, and the market will be worth more than $1.5 billion by 2014. ST Liew, president of Acer’s smartphone group, told Pocket-Lint, “We are looking at wearable, I think every consumer company should be looking at wearable.” While that might be true, should enterprises be doing the same?

Right now wearable tech is mostly for sporty types: heart rate monitors, fancy pedometers, HUDs for skiers and so on. But soon the market will be flooded with a tidal wave of smartwatches and Google Glass. And while this will no doubt affect how companies collect user data, develop apps and interact with consumers, will we be seeing workers around the office or datacenter wearing them?


Rose-Tinted Google Glass?

Smartwatches probably won’t be essential to any enterprise mobility program, merely a notification tool with additional security pains to account for. But despite being banned in many places before it’s even released, Google Glass is getting plenty of people excited.

So far most of it has been on the consumer side of things. Some doubt whether it could ever be used for the enterprise, while others think it’s the best thing since sliced bread (or the Cloud at least). Chris Hazelton of 451 Research told Computerworld it would be the next step in Mobility & BYOD trends, which would eventually help drive its acceptance.

Fiberlink has jumped on board early, offering its MaaS360 platform to IT admins through the device, and said that since most EMM and MDM platforms support Android already, much of the hard work is already done. Meanwhile Dito, a company that provides services for Google applications, has promised enterprise apps for Glass (AKA Glassware) by late 2013/early 2014. The company’s co-founder, Dan McNelis, explained at the E2 conference that one of its clients was looking at building information modelling, or BIM, applications, which could help construction workers on site check schematics and confirm that everything was in the right place and at the right angle.

Along with construction, Glass has been cited as a hands-free tool for utility workers dealing with high voltage, as a potential HUD for pilots, and possibly even as a tool for real-time polling.

Though facial recognition might be banned, the core concept of MedRef, an early Glassware app that brings up a person’s medical records instantly, highlights the potential wearable gear has within the healthcare industry. Whether it’s tracking nurses with RTLS (Real-Time Location Systems), enabling better diagnosis and delivery methods, or even assisting live from the operating table, hospitals could be wearable tech’s first home outside the sports ground.

It’s not just glasses and watches that are entering the enterprise. A smart bracelet has been developed for workers at risk of being kidnapped, sending pre-set warnings to social media and to other workers in the area, while Motorola has developed some heavy-duty engineering gear that is more tailored to workers’ needs and is also customizable. A new smart ring developed by Chinese company Geak has great potential as a security/authentication tool. I can see far more of a market for specially tailored wearable tech arising once the bluster over Glass and smartwatches has died down.

So does wearable tech have a place in business, or is it just another consumer procrastination device? I think some of it does, especially if it has been custom-made for the purpose. But I doubt we’ll be seeing an office full of smart this and wearable that. The future success of the likes of Google Glass or any number of future smartwatches will depend entirely on the quality of the hardware and apps provided, and the imagination of those using them.

 I also agree with Hazelton’s view that BYOWD (Bring-Your-Own-Wearable-Device) will be an important factor.

 

Quinoa should be taking over the world. This is why it isn’t.

Washington Post

By Lydia DePillis, Updated: July 11, 2013

 

In the Andean highlands of Bolivia and Peru, the broom-like, purple-flowered goosefoot plant is spreading over the barren hillsides–further and further every spring. When it’s dried, threshed, and processed through special machines, the plant yields a golden stream of seeds called quinoa, a protein-rich foodstuff that’s been a staple of poor communities here for millennia. Now, quinoa exports have brought cash raining down on the dry land, which farmers have converted into new clothes, richer diets, and shiny vehicles.

But at the moment, the Andeans aren’t supplying enough of the ancient grain. A few thousand miles north, at a downtown Washington D.C. outlet of the fast-casual Freshii chain one recent evening, a sign delivered unpleasant news: “As a result of issues beyond Freshii’s control, Quinoa is not available.” Strong worldwide demand, the sign explained, had led to a shortage. A Freshii spokeswoman said that prices had suddenly spiked, and the company gave franchises the choice to either eat the cost or pull the ingredient while they renegotiated their contract.

Quinoa is a low-calorie, gluten-free, high-protein grain that tastes great. Its popularity has exploded in the last several years, particularly among affluent, health-conscious Americans. But the kinks that kept the grain out of Freshii that day are emblematic of the hurdles it will face to becoming a truly widespread global commodity and a major part of Americans’ diet. It shows the crucial role of global agribusiness, big-ticket infrastructure investment, and trade in bringing us the things we eat, whether we like it or not.

In short, it’s hard to keep something on the menu if you might not be able to afford it the next day. And the American agricultural economy makes it hard for a new product to reach the kind of steady prices and day-in-day-out supply that it takes to make it big.

 

A grain whose time has come

Quinoa died out as a crop in the United States long before upscale lunch places started putting it in side salads. Agronomists have found evidence of its cultivation in the Mississippi Valley dating back to the first millennium AD, but it faded away after farmers opted for higher-yielding corn, squash, and bean crops.

 

Enthusiasts started growing quinoa again in the 1980s, mostly in the mountains of Colorado. It’s not easy, though–sometimes it takes several seasons to get any harvest, since seeds can crack, get overtaken by weeds, or die off because of excessive heat or cold. In 2012, the U.S. accounted for a negligible amount of the 200 million pounds produced worldwide, with more than 90 percent coming from Bolivia and Peru.

Demand started to ramp up in 2007, when Customs data show that the U.S. imported 7.3 million pounds of quinoa. Costco, Trader Joe’s, and Whole Foods began carrying the seed soon after, and the U.S. bought 57.6 million pounds in 2012, with 2013 imports projected at 68 million pounds. And yet, prices are skyrocketing; they tripled between 2006 and 2011, and now hover between $4.50 and $8 per pound on the shelf.

What’s driving the increase? Part of it is that Peru itself, already the world’s biggest consumer of quinoa, patriotically started including the stuff in school lunch subsidies and maternal welfare programs. Then there’s the United Nations, which declared 2013 the International Year of Quinoa, partly in order to raise awareness of the crop beyond its traditional roots.

But it’s also about the demographics of the end-user in developed countries–the kind of people who don’t think twice about paying five bucks for a little box of something with such good-for-you buzz. A few blocks away from Freshii in Washington D.C. is the Protein Bar, a four-year-old Chicago-based chain that uses between 75 and 100 pounds of quinoa per week in its stores for salads and bowls that run from $6 to $10 each (Their slogan: “We do healthy…healthier”).

So far, the company has decided to absorb the higher prices, which still aren’t as much of a cost factor as beef and chicken. It will even pay a little extra to ship the good stuff from South America, rather than the grainier variety that Canada has developed.

“As much as I don’t like it–you never want to pay more for your raw materials–it’s central to our menu,” says CEO Matt Matros. “I’m pretty positive that as the world catches on to what a great product is, the supply will go up and the price will come back down. It’ll come down to the best product for us. If we find that the American quinoa is as fluffy, then we’ll definitely make the switch.”

Cracking the quinoa code

The Andean smallholders are trying to keep up with the demand. They’ve put more and more land into quinoa in recent years; Bolivia had 400 square miles under cultivation last year, up from 240 in 2009. The arid, cool land that quinoa needs was plentiful, since little else could grow there. And thus far, that trait has made it difficult to grow elsewhere.

But that doesn’t mean the rest of the world isn’t trying. A Peruvian university has developed a variety that will grow in coastal climates. There are also promising breeding programs in Argentina, Ecuador, Denmark, Chile, and Pakistan. Washington State University has been developing varieties for cultivation in the Pacific Northwest, and in August will hold a quinoa symposium bringing together researchers from all over to talk about their work.

 

“To me, the imagination is the limit, and a whole lot of effort,” says Rick Jellen, chair of the plant and wildlife sciences department at Brigham Young University. “Quinoa is a plant that produces a tremendous amount of seed. So you have potential, with intensive selection, to identify variants that have unusual characteristics.”

The South American quinoa industry, and the importers who care about it, are worried about the coming worldwide explosion of their native crop. Despite a bubble of media coverage earlier this year about how strong demand is making it difficult for Bolivians to afford to eat what they grow, that demand has also raised incomes from about $35 per family per month to about $220, improving standards of living dramatically. Now, the worry is maintaining a steady income level when production takes off around the world.

Sergio Nunez de Arco, a native Bolivian who in 2004 helped found an import company called Andean Naturals in California, likes to show the small-scale farmers he buys from pictures of quinoa trucks in Canada to prove that the rest of the world is gaining on them, and that they need to invest in better equipment. Meanwhile, he’s trying to develop awareness about the importance of quinoa to reducing poverty, so that they can charge a fair trade price when the quinoa glut comes.

“The market has this natural tendency to commoditize things. There’s no longer a face, a place, it’s just quinoa,” de Arco says. “We’re at this inflection point where we want people to know where their quinoa is coming from, and the consumer actually is willing to pay them a little more so they do put their kids through school.”

He’s even helping a couple of Bolivian farmers who don’t speak English very well fly to that Washington State University conference, so they’ll at least be represented.

“It kind of hurts that the guys who’ve been doing this for 4,000 years aren’t even present,” de Arco says. “‘You guys are awesome, but your stuff is antiquated, so move over, a new age of quinoa is coming.'”

Why isn’t the U.S. growing more of it?

So far, though, the mystery is why the new age of quinoa is taking so long to arrive.

Americans have been aware of the crop for decades, and used to produce 37 percent of the world supply, according to former Colorado state agronomist Duane Johnson. It never took off, partly because of pressure from advocates of indigenous farmers–in the 1990s, Colorado State University researchers received a patent on a quinoa variety, but dropped it after Bolivian producers protested it would destroy their livelihoods.

You don’t need a patent to grow a crop, of course. But the switching cost is extremely high, says Cynthia Harriman of the Whole Grains Council. “Can you get a loan from your bank, when the loan officer knows nothing about quinoa? Will he or she say, ‘stick to soybeans or corn?'” It even requires different kinds of transportation equipment. “If you grow quinoa up in the high Rockies, where are the rail cars that can haul away your crop? Or the roads suitable for large trucks?”

All that infrastructure costs money, and the only farmers with lots of money are in industrial agribusiness. But U.S. industry has shown little interest in developing the ancient grain. Kellogg uses quinoa in one granola bar, and PepsiCo’s Quaker Oats owns a quinoa brand, but the biggest grain processors–Cargill and Archer Daniels Midland–say they’ve got no plans to start sourcing it. Monsanto, the world’s largest seed producer, has nothing either.

Instead, their research and development dollars are focused entirely on developing newer, more pest-resistant forms of corn, soybeans, wheat, sugar, and other staples. All of those crops have their own corporate lobbying associations, government subsidy programs, and academic departments devoted to maintaining production and consumption. Against that, a few researchers and independent farmers trying to increase quinoa supply don’t have much of a chance.

“This is something where it would truly have to come from the demand side–no one wants to get into this and get stuck with all this excess inventory,” says Marc Bellemare, an agricultural economist at Duke University. And how do you determine how much demand is enough, or whether a fad has staying power? “We still haven’t fully unbundled what the decision bundle is. It’s like shining a flashlight in a big dark room.”

That’s why it’s hard for any new crop to make the transition from niche to mainstream. Products, maybe: Soy milk is ubiquitous now, after years as a marginal hippie thing, but it comes from a plant that U.S. farmers have grown for decades. An entirely new species is something else altogether. “I wouldn’t even go so far as to say that’s a non-staple that went big-time,” Bellemare says.

For that reason, quinoa prices are likely to remain volatile for a long while yet. Brigham Young’s Rick Jellen says the lack of research funding for quinoa–relative to the other large crop programs–means that even if they come up with a more versatile strain, it won’t have the resilience to survive an infestation.

“Once that production moves down to a more benign environment, you’re going to get three or four years of very good production,” he predicts. “And then you’re going to hit a wall, you’re going to have a pest come in, and it’s going to wreak havoc on the crop. I think we’re going to see big fluctuations in quinoa prices until someone with money has the vision and is willing to take the risk to invest to really start a long-term breeding program for the crop.”

Which means that if you’re looking forward to a quinoa lunch in downtown D.C., be prepared for a disappointment.

 

Defcon founder’s message to feds fair to some, hypocritical to others

Dis-invitation is interesting because last year Defcon opened with General Keith Alexander, director of the National Security Agency

Jaikumar Vijayan

July 12, 2013 (Computerworld)

Defcon founder Jeff Moss’ request to government agencies asking them not to attend next month’s annual Defcon hacker conference has evoked a mixed response from the security community.

Many see it as little more than a symbolic gesture meant to convey the hacker community’s discomfort over recent revelations of government surveillance activities by fugitive document-leaker Edward Snowden.

Others, though, see it as a somewhat hypocritical move by an organization that has long prided itself on giving all members of the security community a platform to exchange ideas and share information freely.

Two researchers from the network security-consulting firm Secure Ideas on Thursday announced that they would not present at Defcon as scheduled, to protest Moss’ actions.

Moss launched Defcon 21 years ago and has overseen its growth into one of the industry’s largest hacker conferences. On Wednesday, he published a blog post in which he asked government agencies to “call a time-out” from the conference.

“For over two decades Defcon has been an open nexus of hacker culture, a place where seasoned pros, hackers, academics, and feds can meet, share ideas and party on neutral territory. Our community operates in the spirit of openness, verified trust, and mutual respect,” he wrote.

“When it comes to sharing and socializing with feds, recent revelations have made many in the community uncomfortable about this relationship,” he said in asking them not to attend Defcon this year.

The dis-invitation is interesting because it was only last year that Defcon had opened with a keynote from General Keith Alexander, director of the National Security Agency, the entity at the center of the surveillance controversy.

“Jeff Moss’s post was a statement, not an order, but it was an important one,” said Michael Sutton, a vice president of security research with Zscaler.

Moss is well respected within both the black hat and white hat communities and has strong government connections in his role as an advisor to the U.S. Department of Homeland Security (DHS), Sutton noted.

“His statement illustrates the deep disappointment of the Defcon community, who feel that they were blatantly lied to in light of the PRISM scandal,” he said, referring to Alexander’s denials last year when asked at the conference if the NSA was spying on U.S. citizens.

“Jeff is standing up for the community by saying ‘you disrespected us in our own house — we’d prefer you not visit this year’,” Sutton said.

For many at Defcon, Edward Snowden’s recent revelations of widespread NSA surveillance activities are likely to have only reinforced their suspicion of all things government, said Richard Stiennon, principal at IT-Harvest.

With Defcon, there’s always been a bit of the “young generation versus the Man,” Stiennon noted. In recent years, NSA and other three-letter government agencies have been recruiting from Defcon ranks, leading to a gradual thawing in relations between the two communities, he said. Even so, members of the Defcon community have only shown a “wary willingness” to interact with government types at best.


That willingness likely has been tested by the Snowden affair, Stiennon noted. “A group of security professionals who are aligned to doing things and creating things that are protective of security and privacy are going to find themselves at odds with the NSA. So it may be best for both sides to cool off a bit,” he said.

Lawrence Pingree, an analyst at Gartner, cautioned against making too much of Moss’ statement. From a publicity standpoint, it makes a certain amount of sense to ask federal agencies not to attend Defcon, considering the sentiments aroused by Snowden’s revelations, he said.

In reality, it is unlikely that Moss will want to, or even be able to, stop government security types from attending the event if they really want to, he said.

In the end Moss is just sending a gentle reminder to the government that they are likely to be less than welcome among those at Defcon considering recent revelations about PRISM, said Robert Hansen, a white hat hacker and director of product management at WhiteHat Security.

“I don’t believe that anyone who works directly with the staff at Defcon really hates feds,” Hansen said. “What they hate is that the free and open Internet has been taken from them in some sense and that theft is embodied in some sense by the people who are tasked with fulfilling often secret laws.”

“The only issue I see with Jeff’s announcement is that a lot of the most important, die-hard freedom advocates work in or work directly with the military industrial complex, and it’s unfair to paint them as the enemy of hackers,” Hansen noted. “Though Jeff has never said that directly, and I don’t believe he feels that way, I worry that people less familiar with the situation would misrepresent his words.”

Others though see Moss’ stance as needlessly politicizing the annual hacker fest.

In a blog post, James Jardine and Kevin Johnson, two researchers from Secure Ideas, announced they would not present at Defcon this year, citing Moss’ statement about not wanting the government at the show as the reason.

“The basis of our decision is that we feel strongly that Defcon has always presented a neutral ground that encouraged open communication among the community, despite the industry background and diversity of motives to attend,” the blog noted. “We believe the exclusion of the ‘fed’ this year does the exact opposite at a critical time.”

Ira Winkler, president of the Information Systems Security Association and a Computerworld columnist, said that Moss was being unfair in asking the federal government not to attend Defcon.

Much of Defcon’s popularity has stemmed from the effort put into making it a completely neutral venue for the information security community. By asking the government to stay away, Defcon has lost some of that neutrality, he said.


The surveillance activities revealed by Snowden, which Moss alluded to in his statement, have all been found to be legitimate and vetted by all three branches of government, Winkler argued. So rather than trying to exclude government agencies, it would have been better to use Defcon as an opportunity to get more answers about the surveillance practices, he said.

“It would be better to have a legitimate discussion on the issue,” Winkler said. “Why is it legal, why is it constitutional? Stopping a group of people from attending goes against the spirit of what Defcon is supposed to be,” he said.

Defcon has always thrived on presenting controversial security topics and has gone out of its way to make it possible for people to do so, Winkler noted.

“Why is the government being singled out when no group has been singled out and prevented from speaking?” he said.
