
31 March 2012

Newswire

 

Officials: White House offers to curtail drones

By KIMBERLY DOZIER, AP Intelligence Writer

Monday, March 26, 2012

WASHINGTON (AP) —

In a bid to save the CIA’s drone campaign against al-Qaida in Pakistan, U.S. officials offered key concessions to Pakistan’s spy chief that included advance notice and limits on the types of targets. But the offers were flatly rejected, leaving U.S.-Pakistani relations strained as President Barack Obama prepares to meet Tuesday with Pakistan’s prime minister.

CIA Director David Petraeus, who met Pakistan’s then-spy chief, Lt. Gen. Ahmed Shuja Pasha, in London in January, offered to give Pakistan advance notice of future CIA drone strikes against targets on its territory in a bid to keep Pakistan from blocking the strikes — arguably one of the most potent U.S. tools against al-Qaida.

The CIA chief also offered to apply new limits on the types of targets hit, said a senior U.S. intelligence official briefed on the meetings. No longer would large groups of armed men rate near-automatic action, as they had in the past — one of the so-called “signature” strikes, where CIA targeters deemed certain groups and behavior as clearly indicative of militant activity.

Pasha said then what Pakistani officials and its parliament have repeated in recent days: that Pakistan will no longer brook independent U.S. action on its territory by CIA drones, two Pakistani officials said. All the officials spoke on condition of anonymity to discuss the sensitive negotiations.

Pasha went further, saying Pakistan’s intelligence service would no longer carry out joint raids with U.S. counterterrorist teams inside its country, as it had in the past. Instead, Pakistan would demand that the U.S. hand over the intelligence, so its forces could pursue targets on their own in urban areas, or send the Pakistani army or jets to attack the targets in the tribal areas, explained a senior Pakistani official.

The breakdown in U.S.-Pakistani relations follows a series of incidents throughout 2011 that have marred trust — from a CIA security officer who shot dead two alleged Pakistani assailants, to the U.S. Navy SEAL raid that killed Osama bin Laden in May, to the border incident where U.S. forces returned fire they believed came from a Pakistani border post, killing 24 Pakistani troops. The diplomatic fallout has led to the ejection of U.S. military trainers who’d worked closely with Pakistani counter-insurgent forces, slowed CIA drone strikes, and almost halted the once-common joint raids and investigations by Pakistan’s intelligence service together with the CIA and FBI.

Pasha’s pronouncements were in line with demands the Pakistani parliament issued last week for a cessation of all unilateral U.S. actions, including the drone strikes, as part of what Pakistani politicians call a “total reset” of the relationship.

Other U.S. officials said no such concessions were offered to Pasha, and insisted U.S. counterterrorism actions continued as before.

Deputy National Security Adviser Ben Rhodes, traveling with Obama in South Korea, said the administration was in constant contact with Pakistan about counterterrorism and said reports of strain between the two sides “did not represent the ongoing nature of the dialogue with Pakistanis.”

“Frankly, we’ve seen significant success in those counterterrorism operations because U.S.-Pakistani cooperation has enabled the devastation, really, of al-Qaida leadership in that part of the world,” Rhodes said.

Still, the hardening positions on both sides set up a potentially rocky meeting between Obama and Pakistani Prime Minister Yusuf Raza Gilani in South Korea on Tuesday, on the sidelines of the Nuclear Security Summit. President Asif Ali Zardari met with Marc Grossman, the U.S. special representative for Afghanistan and Pakistan, in Tajikistan this week, and Central Command chief Gen. James Mattis is headed to Pakistan in April.

Complicating efforts to restore relations are the demands made by a Pakistani parliamentary committee.

A personnel change at the top of Pakistan’s Inter-Services Intelligence is another wrinkle: Pasha was officially replaced last week by Army Lt. Gen. Zaheerul Islam, a senior U.S. official said. While Islam has spent time studying at U.S. military institutions and once served as a deputy at the ISI, he is a mostly unknown quantity to U.S. officials. The change was not anticipated when the January Pasha-Petraeus meeting took place, both U.S. and Pakistani officials said.

The diplomatic furor threatens to halt the CIA’s drone program, which over the last eight years has killed an estimated 2,223 Taliban, al-Qaida and other suspected militants in 289 strikes, peaking at 117 strikes in 2010, and has reduced al-Qaida’s manpower, firepower and reach, according to Bill Roggio of the Long War Journal website, which tracks the strikes. U.S. officials say his figures are fairly accurate, though they would not give more precise numbers.

The strikes have slowed markedly, to only 10 in the opening months of this year, with the last in mid-March, Roggio said. That puts the program on pace for 40 to 50 strikes for the year, fewer than the year before.

Roggio says the strikes so far this year seem to bear out that shift: of the 10 strikes, two killed high-value targets and another killed three mid-level Taliban leaders, with no large groups reportedly targeted by any of the drones’ missiles. In previous years, only 5 percent to 10 percent of those hit were deemed high-value targets on average, with far more foot soldiers than commanders among them.

U.S. officials took issue with the interpretation that signature strikes had ceased, adding the “U.S. is conducting, and will continue to conduct, the counterterrorism operations it needs to protect the U.S. and its interests.” The CIA offered no official comment.


In his opening salvo to keep the program going, Petraeus offered to give his Pakistani counterpart advance notice of the strikes, as had been the practice under the Bush administration, which launched far fewer strikes overall against militant targets.

The U.S. had stopped giving the Pakistanis advance notice, after multiple incidents of targets escaping, multiple senior U.S. counterterrorist officials say. U.S. intelligence intercepts showed Pakistani officials alerted local tribal leaders of impending action on their territory, and those leaders oftentimes in turn alerted the militants.

Petraeus also outlined how the U.S. had raised the threshold needed to take strikes, requiring his approval more often than in the past, the U.S. official said.

Pakistan’s military wants to go back to the “Reagan rules — the way the CIA operated with the ISI against the Soviets” inside Afghanistan, says former CIA officer Bruce Riedel of the Brookings Institution. “We give them a big check, and they make every decision about how that is spent. Minimal American footprint in country, or involvement in actually fighting the bad guys.”

But after bin Laden was found in a Pakistani military town, “showing the ISI was either clueless or complicit,” Riedel said, “we cannot trust the ISI to fight this war for us.”

On the web: www.longwarjournal.org/pakistan-strikes.php

 

S. Korea Determined To Shoot Down Rocket

Defense News

Mar. 26, 2012 – 10:56AM |

By JUNG SUNG-KI and WENDELL MINNICK |

 

SEOUL — The South Korean military is poised to shoot down a North Korean rocket should it stray into South Korean territory during a launch planned for next month, a defense ministry official here in Seoul said March 26.

The response comes as Seoul’s intelligence authorities confirmed that the main body of a North Korean Unha-3 rocket was moved to a new missile site at Dongchang-ri in northwestern North Korea. The site at Dongchang-ri, in North Pyeongan Province, is significantly larger than the one at Musudan-ri, North Hamgyeong Province, where North Korea test-fired the Taepodong-2 missile in 2009.

The launch will be the first from this facility and the first to take a southern trajectory.

North Korea plans to fire the rocket to put a satellite into orbit between April 12 and 16 to mark the 100th anniversary of the birth of its founding leader Kim Il-sung.

However, the international community believes the rocket launch is a disguised test of a long-range ballistic missile that could deliver a nuclear warhead to targets in Alaska and Hawaii, as well as other Asia-Pacific nations, such as Australia and Indonesia.

“We’re preparing measures to track the missile’s trajectory and shoot it down if, by any chance, it deviates from the planned route and falls into our territory,” Col. Yoon Won-shik, vice spokesman for the Ministry of National Defense, told reporters.

“North Korea’s move is a grave provocation and defiance against the international community,” Yoon said. “Through the rocket launch, we believe, the North is aimed at improving and developing its long-range ballistic missile to deliver nuclear warheads.”

According to South Korean Navy officials, two destroyers, including a 7,600-ton Sejong the Great-class Aegis destroyer, are to be deployed in the West Sea to monitor the missile launch.

If parts of the three-stage North Korean rocket stray from their trajectory and violate South Korea’s territory, the destroyers will fire SM-2 ship-to-air missiles to take them down, a Navy spokesman said.

“The Army will also be ready to fire PAC-2 missiles to intercept parts of the rocket if they fly too close to the South’s territory,” Yoon said. In 2007, South Korea purchased 48 secondhand PAC-2 missile systems from Germany as part of efforts to establish a low-tier air and missile defense system in coordination with ship-based SM-2 missiles and Israel’s Super Green Pine early warning radars.

The North Korean launch site has created new problems for South Korean efforts to intercept and monitor ballistic missiles and rockets. The angle from which the rocket will be launched will make it more difficult to intercept by seaborne ballistic-missile defense systems, said Bruce Bechtol, author of the book, “Defiant Failed State: The North Korean Threat to International Security.”

The new site at Dongchang-ri “can potentially be set up more quickly and denial/deception procedures implemented more effectively than the site at Musudan,” he said. “This impacts on warning time for U.S./[Republic of Korea] policy makers.”

Bechtol said the most important aspect of this launch will not be the southern trajectory, but “seeing if this missile can actually make it through all three stages successfully of the launch cycle.” If successful, this will mean that North Korea will finally have the capability to strike U.S. territory.

U.S. President Barack Obama has renewed his warning against North Korea’s provocations.

“There will be no rewards for provocations. Those days are over,” Obama said March 26 in Seoul. “To the leaders of Pyongyang, I say, this is the choice before you. This is the decision that you must make.”

Obama is attending the two-day Nuclear Security Summit in Seoul. Chinese President Hu Jintao and other world leaders are gathered in South Korea to discuss ways of improving nuclear security and to hold bilateral summit talks on the sidelines. North Korea and Iran are expected to dominate the nuclear agenda at the summit.

After a meeting between Obama and Hu, senior White House aide Ben Rhodes said the Chinese leader indicated to his U.S. counterpart that he took the North Korean nuclear standoff very seriously and was registering his concern with Pyongyang.

“The two leaders agreed to coordinate closely in responding to this potential provocation and registering our serious concern to the North Koreans and, if necessary, consider what steps need to be taken following a potential satellite launch,” Rhodes said.

Wendell Minnick reported from Taipei.

 

Technology testing could delay Air Force aerial tankers, GAO says

Nextgov

By Bob Brewin
03/26/2012

The Air Force and Boeing Co. risk falling behind schedule as they incorporate three advanced military technologies into the commercial 767 airliner that serves as the basis for the new KC-46 air tanker, the Government Accountability Office reported Monday.

The service awarded Boeing a $4.4 billion firm, fixed-price development contract for the KC-46A in February 2011, and GAO said estimated costs have climbed $900 million since then. The Air Force plans to spend $51.7 billion to buy 179 KC-46 tankers between now and 2027.

“While designing a new tanker using a modified commercial platform is not as technically challenging as a more revolutionary weapon system, the program still faces some technical risks, including technologies that have not yet been demonstrated during flight,” GAO said.

Two of the three KC-46A technology developments are software based, according to GAO. The watchdog said these include an algorithm that provides information for automatic re-routing and construction of new flight paths. GAO described that algorithm as “new and novel technology, critical to meeting operational requirements.”

Boeing and the Air Force face other risks in developing related KC-46A threat correlation software — a new module that correlates tracks from multiple potential threats and automatically reroutes the tanker’s flight path to avoid them, GAO said.

Boom operators — who control the refueling system contained in a boom extended from the rear of the aircraft — in the more than 50-year-old KC-135 and more than 30-year-old KC-10 tankers ride in the back of the aircraft and visually steer the boom through large windows located in its belly.

In the KC-46, the boom operator rides in the front of the aircraft and steers the boom based on visual cues from a three-dimensional display system. Images of the aircraft on the screens are captured by a pair of cameras outside the aircraft that are meant to replicate the binocular aspect of human vision by supplying an image from two separate points of view. This provides the boom operator with greater fidelity and a more realistic impression of depth, or a third dimension.

GAO said similar technology has been used on two foreign-operated refueling aircraft and a representative model now being tested with other Boeing tankers.

The new software and the 3-D viewing system “have not yet been demonstrated in a realistic environment, a higher level of maturity that is a best practice,” GAO said.

When projects begin development with technologies that are only at this level of testing, they are likely to experience greater cost growth, according to GAO.

“There is broad agreement that KC-46 schedule risk is a concern,” the agency said in its report. “Significant concurrency, or overlap, among development and production activities adds risk to the program.” GAO added that the Air Force and Boeing have said there’s a moderate risk of schedule overruns.

Maj. Gen. Christopher Bogdan, KC-46 program executive officer and program director, said in a response appended to GAO’s report that because the KC-46 is based on a commercial airframe, it has “nowhere near” as much development risk as other major aircraft development programs.

Bogdan did not address the three key technology concerns GAO raised in its report, but the Defense Department agreed with the agency’s recommendation to monitor the contract and to develop lessons-learned advice for future acquisition programs.

 

 

White House Sets Cybersecurity Priorities

Agencies should focus on safer Internet connections, continuous monitoring of systems, and authentication, says cybersecurity chief.

By Elizabeth Montalbano, InformationWeek
March 26, 2012
URL: http://www.informationweek.com/news/government/security/232700242

Obama administration cybersecurity coordinator Howard Schmidt has set a government-wide goal for agencies to implement three priorities to help protect federal IT systems against cyberattack.

Schmidt is pushing agencies to achieve 95 percent utilization of critical administration cybersecurity capabilities on IT systems in the areas of Trusted Internet Connections (TIC), continuous monitoring, and strong authentication by 2014. The effort is part of the administration’s Cross-Agency Priority Goals initiative on Performance.gov, Schmidt said in a White House blog post.

The White House set up the program as part of a broader government accountability effort to spur agencies to achieve performance outcomes on key administration goals in a variety of areas. The feds use the Performance.gov website to grade agencies on their improvement efforts.

Schmidt joined with experts from the departments of Defense and Homeland Security, the National Institute of Standards and Technology, and the Office of Management and Budget to identify the three cybersecurity priorities with limited federal budgets in mind, giving agencies reasonable guidance for achieving improvements going forward.

“Federal departments and agencies must defend their information systems in a resource-constrained environment. Balancing system security and survivability while meeting numerous operational requirements requires robust risk management,” Schmidt said in the post.

He also provided guidance for how to improve in each of the priority areas:

–In the area of TIC, agencies should focus on consolidating external telecommunication connections and ensuring a set of baseline security capabilities for situational awareness and enhanced monitoring.

–To improve continuous monitoring, agencies should turn what’s historically been a static assessment of security controls into a more “dynamic risk mitigation program” to provide more real-time status updates and increase visibility into system operations.

–Finally, to foster strong authentication, agencies should step up their efforts to set up smartcard credential systems that provide multifactor authentication and digital signature and encryption capabilities for accessing federal IT systems and facilities.

The White House is aligning its latest cybersecurity guidance with these priorities to help agencies achieve their goals, according to Schmidt.

For instance, in the area of continuous monitoring, the federal government has now mandated that agencies submit a monthly security report through an online compliance tool called CyberScope. The reporting mandate is a new requirement under the Federal Information Security Management Act (FISMA), the law that governs the security of federal IT systems.

The priorities also are a focus of an update to NIST’s recently released official federal cybersecurity standards.

 

North Korea refuses to halt launch plan despite Obama’s warnings


By the CNN Wire Staff

updated 5:28 AM EDT, Tue March 27, 2012

 

(CNN) — North Korea said Tuesday that it would not abandon its plan to carry out a satellite launch next month despite recent warnings from President Barack Obama over the move.

The North “will not give up the satellite launch for peaceful purposes, which is a legitimate right of a sovereign state and requirement essential for economic development,” Pyongyang’s official Korean Central News Agency reported, citing the country’s foreign ministry.

During his visit to South Korea this week, Obama has said that if North Korea moves forward with the launch — which Washington and Seoul say would breach U.N. Security Council resolutions through the use of a long-range missile — it will further deepen its isolation, damage relations with its neighbors and face new sanctions on top of those that have already strangled the country.

The North Korean report Tuesday said that Obama’s stance “reflects his wrong conception” of the situation.

“The U.S. says that it has no hostility” toward North Korea, the news agency cited the ministry as saying. “But it has not yet departed from the inveterate conception of confrontation. That is why it regards the launch of a satellite for peaceful purposes as a launch of long-range missile.”

 

 

Pyongyang also appears to be acting on its expressed determination to press ahead with the launch. The regime moved a long-range rocket it plans to test fire to a launch pad Monday, a South Korean Defense Ministry official said.

A U.S. official said the United States had also seen signs the North Koreans are preparing to launch the rocket.

The defiant words and deeds from Pyongyang coincide with a two-day nuclear summit in Seoul that has brought together leaders from the United States, Russia, China and dozens of other nations to discuss how to deal with nuclear terrorism and how to secure the world’s nuclear material.

Overshadowing the summit’s message of international cooperation was North Korea’s initial announcement earlier this month that it plans to carry out the rocket-powered satellite launch in mid-April.

“Here in Korea, I want to speak directly to the leadership in Pyongyang. The United States has no hostile intent toward your country,” Obama said during a speech Monday in front of students at Seoul’s Hankuk University of Foreign Studies.

“But by now it should be clear, your provocations and pursuit of nuclear weapons have not achieved the security you seek. They have undermined it.”

South Korean President Lee Myung-bak said Monday his country would “thoroughly retaliate against North Korea” if provoked.

If the rocket is launched, South Korea is prepared to “track its trajectory,” said the Defense Ministry official, who did not want to be named.

“There are concerns that parts of the rocket may fall within South Korean territory,” he said. “If that were to happen it would threaten lives and cause damage to the economy. To guard against that, they (the military) will be tracking the orbit.”

Japan has said it is preparing missile defense systems ahead of the launch.

The rocket was moved to a launch pad in the northeastern portion of Dongchang-ri, a village in northwest North Korea, the South Korean official said.

Pyongyang has invited international space experts and journalists to witness the launch, which is timed to coincide with the 100th anniversary of the birth of Kim Il Sung, North Korea’s founder, who ruled until his death in 1994.

Kim Jong Un, the grandson of Kim Il Sung, became the new head of the secretive regime in December, following the death of his father, Kim Jong Il. The leadership transition has added to uncertainties about Pyongyang’s intentions.

Using ballistic missile technology is in violation of U.N. Security Council Resolution 1874 and against a deal North Korea struck with the United States earlier this month to suspend nuclear and missile tests in return for food aid.

The North Korean foreign ministry statement Tuesday said that during the talks on the deal, Pyongyang had “consistently maintained that a moratorium on long-range missile launch does not include satellite launch for the peaceful purposes.”

The planned launch was front and center in a meeting Monday between Obama and Hu Jintao, the president of China, a neighbor and ally of North Korea.

“The two leaders agreed to coordinate closely in responding to this potential provocation and registering our serious concerns,” Ben Rhodes, one of Obama’s deputy national security advisers, said after the meeting.

 

The two also agreed there is a broad view in the international community that a satellite launch would be destabilizing, he said.

Obama and Hu also discussed North Korea’s new leadership, “this being a sensitive time on the Korean peninsula, and this being a new leader who is going to take some lessons from what works and what doesn’t,” Rhodes said.

Beyond the flurry of statements about the satellite launch, meetings and speeches continued Tuesday at the nuclear security summit.

Obama and Ban Ki-moon, the secretary-general of the United Nations, both underlined the threat of nuclear terrorism and the need for international cooperation to prevent it.

CNN’s Jethro Mullen, Paula Hancocks, Tim Schwartz, Becky Brittain, and Pam Benson contributed to this report.

 

Analyst: Obama’s FY13 budget would mean 8 percent dip in contract spending

Federal Times

By SARAH CHACKO | Last Updated: March 27, 2012

Federal agencies plan to reduce contract spending by $60 billion next year, according to an analysis of President Obama’s 2013 budget plan by the market research firm Deltek.

Agencies included more than $700 billion in their 2013 budget requests for products and services that contractors will likely provide, which is about 8 percent below the $768 billion in contract spending that agencies budgeted for this year, Ray Bjorklund, Deltek’s chief knowledge officer, said at the firm’s annual budget outlook event Tuesday.

The biggest hit to contract spending is expected in aerospace and defense programs, where agencies cut about $40 billion — or 21 percent — for a contract spending projection of $148 billion in 2013, according to Bjorklund. The Army’s budget will be hit the hardest: Contract spending is projected to drop by $14 billion, or 38 percent, from this year’s level.

Contract spending in architecture and engineering services and construction also is projected to drop about 16 percent governmentwide, from $32 billion in 2012 to $27 billion in 2013, the analysis showed. Funds for investment products, such as machinery, furniture and information technology hardware and software, were also reduced by 14 percent, from $34 billion in 2012 to $29 billion in 2013.

Agencies are likely to redirect some of that money toward services to maintain existing infrastructure, Bjorklund said.

Despite these cuts, there are pockets of anticipated growth in contracting, Bjorklund said. NASA has requested $4 billion for research and development contracts, a 10 percent increase over 2012, and the Commerce Department has asked for $870 million in support services and equipment for the National Oceanic and Atmospheric Administration, an 18 percent increase over 2012, Bjorklund said.

Agencies requested $127 billion for other white-collar services, such as management and administrative support. The request reflects a 10 percent increase over what agencies received in 2012. The Army’s budget request of $20 billion for white-collar services shows the greatest growth compared with 2012 as the service prepares to train Afghan security forces, Bjorklund said.

Spending on medical services contracts is also expected to grow by about $5 billion to $75 billion in 2013, an increase of 7 percent over 2012. Most of the growth in medical services will come from the Office of Personnel Management, which is driven partly by health insurance programs for government employees, Bjorklund said.

NSA dismisses claims Utah Data Center watches average Americans

By Elizabeth Prann

Published March 28, 2012 | FoxNews.com

What would you think if someone told you personal emails, voicemails and web searches (basically, your electronic footprint) could be viewed and stored by a government official?

The feds say that would never happen, but some say it already is, and that by 2013 it will all be funneled into the Utah Data Center.

A more formal description of the center is the First Intelligence Community Comprehensive National Cyber-security Initiative Data Center.

And it’s huge. One million square feet, all to be filled with more technology and data storage than you could imagine.

It is not a stretch to say Utah is quickly becoming the data center capital of the U.S., especially now that the state will be home to what some say is one of the largest spy centers in the nation.

Plans for the facility are nothing new. Construction has been well underway for a year, and brainstorming for the facility began a decade ago. The NSA has been working with the U.S. Army Corps of Engineers on the estimated billion dollar data center.

So why is it making headlines now?

In a bombshell article published this month in WIRED Magazine, author James Bamford interviews a whistleblower who claims the NSA has software that can search the communications of ordinary Americans.

That software can reveal everything from target addresses and web searches to social media activity, emails and phone calls.

The article claims any communication that looks suspicious is automatically red-flagged. Basically, we’re all at risk of being watched.

The response from the NSA?

That it’s simply not true.

“Many allegations have been made about the planned activities of the Utah Data Center,” NSA public information officer Vanee’ Vines wrote in an email.

“What it will be is a state-of-the-art facility designed to support the Intelligence Community’s efforts to further strengthen and protect the nation. NSA is the executive agent for the Office of the Director of National Intelligence, and will be the lead agency at the center,” she said.

Her statement runs along the same lines as what NSA Director Gen. Keith Alexander said when he was questioned during a congressional hearing last week by Georgia Rep. Hank Johnson.

Alexander repeatedly assured Johnson that the agency has neither the interest nor the capability to eavesdrop on average Americans.

Vines’ statement echoed that: the NSA is building the center to support the federal government’s effort to protect the nation’s cybersecurity.

“We are not going to dissect any particular news story – especially one that relies in part on the speculation of former officials and several unnamed sources,” Vines said. “Those of us who are privileged to be a part of this great institution, NSA, see firsthand that our nation is indeed becoming safer as a result of our hard work, dedication and the collaboration across the entire intelligence community.”

Keep in mind it was only weeks ago that Defense Secretary Leon E. Panetta was asked at a speech what, personally, keeps him up at night. The defense secretary responded swiftly: “a cyber attack.”



Read more: http://www.foxnews.com/politics/2012/03/28/nsa-dismisses-claims-utah-data-center-watches-average-americans/

 

Ohio’s energy economic infusion not without risks

Washington Post

Sunday, March 25, 2012

The rich Utica shale that runs beneath eastern Ohio holds vast quantities of natural gas, gas liquids and crude oil.

The help-wanted sign is out in Canton, Ohio, for Chesapeake Energy.

The company that has led the charge in shale gas drilling is looking for truck drivers with licenses for hazardous materials, a purchasing coordinator for oil field equipment, a pipeline technician, a field safety coordinator, administrative assistants, troubleshooting electricians, a tax analyst and more.

Chesapeake is mobilizing for a massive drilling and development campaign in the state. The company has spent $2.2 billion and amassed about 800,000 acres of leases in the rich Utica shale that runs underneath eastern Ohio. It has eight rigs running and will have 20 poking holes in the ground around Election Day. It plans to install 200 miles of pipeline this year to bring its bounty to market.

“We are very excited about the Utica,” Chesapeake’s chief executive, Aubrey McClendon, said in a Feb. 22 conference call.

That’s welcome news in Canton, where the unemployment rate peaked at 12.3 percent in January 2010 and where it was still running at 8.7 percent at the end of last year.

But shale drilling has also presented Ohioans with a dilemma. Chesapeake Energy is using the controversial combination of hydraulic fracturing and improved horizontal drilling methods to unlock vast quantities of natural gas, gas liquids and crude oil, driving down gas prices to 10-year lows. That has raised hopes for a revival of industrial and power plant uses of the gas.

In some cases, however, companies that haven’t drilled properly have contaminated water aquifers. And “fracking,” which uses about 4.5 million gallons of water – about 1,300 tank trucks full – to initially stimulate a well and get it flowing, has raised concerns about the disposal of toxic drilling waste; a Youngstown, Ohio, disposal well for fracking waste has been linked by seismologists to earthquakes. In December, the governor ordered the closure of several disposal wells in the area.

A Quinnipiac University opinion poll in January showed that 85 percent of Ohio voters believed that natural gas drilling would create jobs, and that by a 64-to-29 percent margin they believed that the economic benefits of drilling for natural gas outweigh the environmental concerns.

At the same time, voters said by a 72-to-23 percent margin that hydraulic fracturing should be suspended until there are further studies about its impact. They said by a 43-to-16 percent margin that fracking would damage the environment.

Support for a moratorium was strong among all groups, Quinnipiac said.

“Ohio can’t have it both ways,” Thomas Stewart, executive vice president of the Ohio Oil and Gas Association, said in a statement after the poll was released. “A ban on hydraulic fracturing would result in increased energy costs and bring oil-and-gas-related capital investment and job growth in Ohio to a grinding halt, just as the positive effects of both are currently reviving once-struggling communities throughout the state.”

“They want the jobs, but they’re very risk-averse,” said Peter Brown, assistant director of the Quinnipiac polling institute.

For GOP presidential candidates stumping here before the Super Tuesday primaries, the issue poses a quandary, as it does for the state’s Republican Gov. John Kasich, whose approval ratings have been low.

“The politics of the situation would probably make a governor, even a free market advocate like Kasich, want to make sure he’s dotted all the i’s and crossed all his t’s when it comes to safety,” Brown said.

Oil and gas drilling is nothing new in Ohio. In 1860, just a year after the first commercial discovery of oil in Pennsylvania, drillers found oil in Ohio near Macksburg. By 2010, there had been 275,774 wells drilled in the state, putting it in fourth place nationwide.

John D. Rockefeller established the Standard Oil Co. by cornering the Ohio refinery business and pipelines to drive down crude oil prices and capture retail profits.

The size of Ohio's reservoirs has paled next to the giants in Texas, California and Alaska. There was an Ohio oil rush in the 1890s, and the state's production peaked in 1896. High oil and gas prices after the 1979 price shock triggered a new drilling rush, and natural gas production peaked in 1984.

Today there are 64,378 active wells, most of them “stripper” wells producing fewer than 10 barrels of oil a day or less than 60,000 cubic feet a day of natural gas.

Hydraulic fracturing combined with horizontal drilling is a new twist, however. The horizontal drilling allows companies to snake through the shale layer rather than just tap a cross section. Water is shot into wells to crack the shale rock and release natural gas or oil, which then flows up the well. What makes Ohio particularly alluring is that many of its shale areas are “wet,” meaning they have crude oil or natural gas liquids – currently much more valuable than gas, which is unexpectedly abundant and cheap.

Suddenly, Ohio is on the verge of an old-fashioned oil and gas rush. In certain hot spots, sums exceeding $3,000 an acre have been paid for a five-year lease, according to the law firm Johnson & Johnson, which represents landowners. On the edges, companies are paying $1,000 an acre, it said. Some companies are offering more than the one-eighth landowner royalty that has been the standard in Ohio.

In January, the French oil giant Total paid $700 million for a 25 percent interest in part of Chesapeake’s Ohio acreage and pledged to pay $1.63 billion over seven years for drilling and well completion.

While big money is at stake, Ohioans fear their water is, too.

Fracking got a bad reputation in Ohio before the shale drilling boom started. Hydraulic fracturing can be used in vertical wells, too, and in 2007 Ohio Valley Energy “fracked” a well in Bainbridge Township. Later, a report by the Ohio Department of Natural Resources said the company disregarded a bad cement job and went ahead with the fracturing anyway. Pressure built up in the annular space around the drill pipe, and gas migrated vertically through natural fractures in the rocks, the report said.

On Dec. 12, natural gas was detected at the local police station and two days later in homes, some of which had wells in their basements.

Then early on the morning of Dec. 15, an explosion lifted the two-story home of Richard and Thelma Payne off its foundation. The Paynes, an elderly couple who had lived there since 1956, were “catapulted” from their bed, according to a lawsuit filed later. The Paynes were not hurt, but 19 homes were evacuated that morning.

Even though it was a vertical well, the issue of gas migrating from horizontal wells into aquifers through rock fractures is something that frightens many people living over shale prospects. So far, only 35 horizontal wells have been drilled in Ohio, according to state regulators.

Bainbridge Township trustee Jeff Markley says he’s “nervous” about the ramping up of shale drilling. He has attended seminars held by the Ohio Department of Natural Resources that make him feel better, but he says “I imagine that problem could happen again if the driller doesn’t follow the rules.”

“I think it’s a good idea to drill for natural gas, but I don’t think they should drill in neighborhoods with high-density population,” said Dale Markowitz, lawyer for the Paynes and their neighbors. Markowitz more often represents landowners, churches and golf courses that want to lease their mineral rights and get royalties.

“If you follow the rules in place, the odds are pretty low that you’ll have a problem,” he says. “But you could have a drilling disaster no matter what.”

Pennsylvania, where the Marcellus shale has already been heavily developed, has not provided a role model for Ohioans. Among the more than 4,200 shale gas wells drilled in Pennsylvania, there have been enough problems to provide fodder for Ohio’s drilling foes.

The most common problems occur on the surface. In February, for example, Pennsylvania fined Chesapeake $565,000 for sediment that washed off both an access road and a drilling pad into a river, damaging filters at a water treatment plant. According to the environmental group Clean Water Action, in 2010 there were 1,200 violations of environmental regulations by gas drillers in the Marcellus, a quarter of them from leaks or poor construction of waste pits for fluids that flow back to the surface after fracking.

Kasich has moved to balance the economic and safety issues. “You cannot degrade the environment at the same time you’re producing this industry. It is not acceptable,” he said in his state of the state address in February. “And it’s not a false choice. The biggest companies know that you need to have tough environmental rules. They can’t be complicated. They can’t be over the top, but we need to have them because we can’t have some yahoo come into the state and damage this whole industry because they’re irresponsible.”

Kasich spokesman Rob Nichols notes that the best drilling prospects, in the eastern and southeast regions, are in “that part of the state where people have been ignored far too long.” (Portions of the Utica shale also run beneath the Marcellus and into Pennsylvania.)

Many experts say that cheap natural gas in the southeast part of the state could also lure industries that rely heavily on gas. Shell Oil has said it would build a chemical plant in the area, but hasn’t decided which state. Pennsylvania, West Virginia and Ohio are all in the running; each is vying with generous packages of tax incentives, pipeline routes and permit expediting. An announcement is due in the next two weeks.

Meanwhile, the drillers are hiring. Jake Holland, a civil engineering major who graduated from the University of Akron in 2009, served tables and tended bar at restaurants until Chesapeake Energy hired him as a field technician eight months ago. Now he works with landowners and county engineers to figure out the best routes for trucks and locations for drilling pads. Sure of a regular paycheck, he hopes to buy a house in the next year or so.

“We cannot let our fears outweigh the potential,” Kasich said.

At the same time, Nichols says, Kasich is “looking at regulatory processes to make sure that when companies come in here they leave the place better than when they arrived.” The state proposed tighter regulations in October and Kasich has backed a tripling in the number of regulators – who now number two per county – and a tripling in their small budget.

Environmentalists say it’s a step in the right direction, but not enough. A group of organizations proposed 19 pages of technical revisions. The state issued a new set of regulations that environmental groups are still reviewing.

“They’ve put some good things in there, but they still have a long ways to go before they’re up to the highest standards,” said Brianna Mordick, a geologist working for the Natural Resources Defense Council. “This is an industrial process. You can reduce the risk of environmental problems, but you can never eliminate it.”

Mordick said “you have to have really strong regulations in place and then you have to be able to enforce those regulations. When exploration and development explodes, it’s really hard for these states to keep up. Relative to the number of wells, the number of inspectors is really small, and there’s no way for them to get to the sites with enough frequency.”

Meanwhile, the exploration companies are marching west from Pennsylvania.

Buried risks

So far, Ohio’s main role in the shale gas business has been to bury Pennsylvania’s waste.

Because of its geology, Pennsylvania has a limited number of waste disposal wells. So most of the state’s fracking chemicals and the nasty stuff sucked back up from the shale rock gets injected into what the Environmental Protection Agency calls class 2 wells in Ohio, which has 177 of them.

The D&L Energy well in Youngstown was one of them. D&L Energy began drilling it in July 2010 and waste injections began in December 2010. Three months later, the first earthquakes struck. Youngstown had never had one before.

John Armbruster, a seismologist with Columbia University’s Lamont-Doherty Earth Observatory who investigated the tremors, says that the well was 2.7 kilometers deep. It was supposed to dispose of waste in sandstone, which is a couple hundred million years old, right above the basement rock, which is a billion years old.

The waste is pumped in at extremely high pressures – high enough to lift the Empire State Building, Armbruster says. It’d be like having the weight of an SUV in the palm of your hand.

The D&L well happened to be right on top of, or very close to, a fault, he adds.

“I compare it to a hydraulic jack,” Armbruster said. “The pressure in the well is a thousand pounds per square inch. Put that over a piece of fault that’s 1 kilometer by 1 kilometer, and you have enough pressure to move a piece of the Earth.”
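Armbruster's figures can be roughly checked. A minimal sketch, using standard unit conversions (the 1 km by 1 km fault patch is his illustration, not a measured value):

```python
# Force on a fault patch from the injection pressure Armbruster describes.
PSI_TO_PA = 6894.76          # pascals per psi (standard conversion)

pressure = 1000 * PSI_TO_PA  # ~6.9 MPa of injection pressure
area = 1000.0 * 1000.0       # a 1 km x 1 km fault patch, in square meters

force = pressure * area      # newtons pushing on that patch
print(f"{force:.1e}")        # ~6.9e+12 N, the weight of roughly 700 million tonnes
```

A force of that scale acting on an already-stressed fault is why a relatively modest injection pressure can, in his comparison, "move a piece of the Earth."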

(D&L Energy’s chief executive, Ben Lupo, did not return phone calls seeking comment.)

On Dec. 30, the state ordered the shutdown of the well and a handful of others in the Youngstown area. On Dec. 31, a quake registered 4.0 on the Richter scale, 40 times the energy of earlier ones.
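The "40 times the energy" figure is consistent with the standard Gutenberg-Richter energy relation, under which radiated energy scales as 10^(1.5 × magnitude). A quick check (the implied starting magnitude is inferred here, not stated in the article):

```python
import math

# A 40-fold jump in radiated energy corresponds to a magnitude gap of
# log10(40) / 1.5 under the Gutenberg-Richter energy relation.
delta_m = math.log10(40) / 1.5
print(round(delta_m, 2))   # ~1.07, implying the earlier quakes were near magnitude 2.9
```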

How will the politics of this play out? GOP presidential candidates have supported expanded oil and gas drilling. Obama, however, has been staking out a middle ground – more shale gas drilling, but with safeguards. While it might not please environmental groups or industry groups, it might be in line with Ohio’s uncertain voters.

“Ohio’s economy obviously needs jobs, and the consensus is that this will create jobs and bring tax revenue,” Quinnipiac’s Brown said. “So there’s an inclination to want to make sure that Ohio gets those economic benefits. This is also an electorate with a sense of risk averseness on anything that involves safety and environmental questions. The challenge for a politician is to walk that line.”

http://sfgate.com/cgi-bin/article.cgi?f=/c/a/2012/03/25/BUOA1NFVS9.DTL

This article appeared on page D – 1 of the San Francisco Chronicle

 

We’re No. 1 (and No. 3)! Surprising Facts About the U.S. and Oil

Dailyfinance.com

By Charles Hugh Smith

Posted 4:15PM 02/28/11

 

With the price of oil fluctuating around $100 per barrel, here’s a timely question: Where does America get its oil? Some of the answers might surprise you.

All of the data here are derived from the U.S. Department of Energy's Energy Information Administration (EIA), which is a treasure trove of data on global energy. For example, Libya has the largest proven reserves of oil in Africa.

Let’s start with America’s oil consumption, which is 18.8 million barrels per day (MBD), according to the EIA. That usage puts the U.S. atop the list of the world’s largest oil consumers by a wide margin. Indeed, U.S. demand is more than that of the next four nations combined: Japan, Russia and rising economic powers China and India:

 

U.S : 18.8 MBD

China: 8.3

Japan: 4.4

India: 3.1

Russia: 2.7
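The "next four nations combined" claim checks out against the figures above:

```python
# EIA consumption figures from the list above, in millions of barrels per day.
consumption_mbd = {"U.S.": 18.8, "China": 8.3, "Japan": 4.4, "India": 3.1, "Russia": 2.7}

next_four = sum(v for k, v in consumption_mbd.items() if k != "U.S.")
print(round(next_four, 1))                   # 18.5
print(consumption_mbd["U.S."] > next_four)   # True
```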

 

Who’s Pumping What

 

What countries are the top producers of oil? The answer provides an important context for any discussion of oil supply:

Russia: 9.9 MBD

Saudi Arabia: 9.7

U.S.: 9.1

 

Are you surprised that the U.S. is still the No. 3 producer of oil in the world? The U.S. produces roughly the same amount of oil as Canada, Mexico and the United Arab Emirates combined (Nos. 6, 7 and 8 on the top producers list). That means the U.S. supplies 48.6% of its consumption and imports the remaining 51.4%, or 9.67 MBD, from oil-exporting nations.

Given that total global production of oil is 85.5 MBD and total consumption is 85.6 MBD, the U.S. by itself consumes 22% of the world’s oil supply.
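The percentages above can be reproduced directly from the EIA figures quoted:

```python
consumption = 18.8        # U.S. oil consumption, MBD
imports = 9.67            # U.S. oil imports, MBD
world_consumption = 85.6  # total global consumption, MBD

print(round(imports / consumption * 100, 1))                   # 51.4 -> share imported
print(round((consumption - imports) / consumption * 100, 1))   # 48.6 -> supplied domestically
print(round(consumption / world_consumption * 100))            # 22 -> U.S. share of world demand
```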

 

You may have noticed in the above chart that the EIA lists both "crude oil production" and "total oil production" in the U.S. Crude oil production is 5.36 MBD, a number that has remained stable for the past few years. The EIA also has a chart that lists oil production by region and state. The data contain all sorts of interesting tidbits — for example, crude oil production in North Dakota has risen from 85,000 barrels a day in 2004 to 218,000 barrels a day in 2009. California still produces 567,000 barrels a day, and Texas pumps 1.1 million barrels a day, about the same amount the U.S. imports from Saudi Arabia.

The EIA states that total U.S. production is 9.14 MBD, considerably more than the 5.36 MBD listed for crude oil alone. What’s the source of the discrepancy? The EIA lists other sources, such as “Natural gas plant liquids.”

 

Geopolitical Diversity

So where does the U.S. get the 9.67 million barrels a day of oil we import? It turns out our biggest suppliers are North American neighbors.

By group:

OPEC: 4.67 MBD

The Persian Gulf: 1.67

Non-OPEC: 5

 

By country:

Canada: 2 MBD

Mexico: 1.2

Saudi Arabia: 1

Nigeria: 1

Venezuela: 0.82

Iraq: 0.33

Angola: 0.30

Brazil: 0.27

 

The U.S. imports oil from a geopolitically diverse array of nations: The Persian Gulf, Africa, South America and North America. Interestingly, the U.S. draws relatively little oil from the world’s top producer, Russia.

 

The silver lining, if there is one, in the trend toward higher oil prices is that America obtains only a modest percentage of its imported oil from countries, such as Libya, that are experiencing political turmoil and instability. However, the U.S. relies on other nations for more than half its oil supplies, a potentially unstable dependence in an increasingly unstable world.

http://www.dailyfinance.com/2011/02/28/surprising-facts-about-us-and-oil/

 

 

White House Shares $200 Million Big Data Plan

Obama administration announced plans to fund R&D push for big data analysis, for applications ranging from health to defense.

By J. Nicholas Hoover
InformationWeek
March 29, 2012 04:02 PM

The Obama administration on Thursday announced plans to spend hundreds of millions of dollars on a new “big data initiative” for research and development into technology to “access, store, visualize, and analyze” massive and complicated data sets.

The initiative comes as volumes of data used by government and the private sector expand exponentially. It includes commitments from several federal agencies to develop new technologies to manipulate and manage big quantities of data and use those technologies in science, national security, and education. John Holdren, director of the White House’s Office of Science and Technology Policy, compared the effort to federal research that led to breakthroughs in supercomputing and to the development of the Internet.

“While the private sector will take the lead on big data, we believe that the government can play an important role, funding big data research, launching a big data workforce, and using big data approaches to make progress on key national challenges,” Holdren said in a press conference to announce the effort. The government is also helping to set big data standards.

The federal agencies working on the initiative will be the National Science Foundation, the National Institutes of Health, the Department of Defense, the Department of Energy, and the U.S. Geological Survey.

Among the big data projects will be a joint solicitation from the National Science Foundation and the National Institutes of Health, which will award up to $25 million in funding for 15 to 20 research projects that, according to the solicitation, will "advance the core scientific and technological means of managing, analyzing, visualizing, and extracting useful information from large, diverse, distributed, and heterogeneous data sets."

In addition to the big data solicitation, the National Science Foundation is implementing a long-term big data strategy that includes encouraging research, funding a $10 million data project at the University of California, Berkeley, supporting a geosciences data effort called Earth Cube, and more.

The Department of Defense, meanwhile, plans to spend about $250 million annually on big data, including $60 million on new research projects. The Defense Advanced Research Projects Agency is creating the XDATA program, a $100 million effort over four years to "develop computational techniques and software tools for sifting through large structured and unstructured data sets."

The National Institutes of Health announced as part of the effort that it has placed 200 Tbytes of genomic data–the world’s largest set of human genetic data, according to the White House–on Amazon Web Services as part of the international 1000 Genomes Project.

The Department of Energy is no stranger to big data, being home to some of the most powerful supercomputers in the world. As part of the big data initiative, the agency’s Lawrence Berkeley National Laboratory will spend $25 million to create a new research facility, the Scalable Data Management, Analysis, and Visualization Institute.

In a blog post accompanying the announcement, OSTP deputy director Tom Kalil called on industry, universities, and non-profit organizations to join the administration in its efforts.

For their part, technology companies applauded the effort. “The administration’s work to advance research and funding of big data projects, in partnership with the private sector, will help federal agencies accelerate innovations in science, engineering, education, business and government,” said David McQueeney, VP of software for IBM Research.

http://www.informationweek.com/news/government/info-management/232700522

 

NTIA’s $18 billion spectrum reallocation plan would force battlefield networks to move

NextGov

By Bob Brewin
03/29/2012

The National Telecommunications and Information Administration on Tuesday said it would cost $18 billion to reallocate 95 MHz of federal spectrum for commercial use and would take at least a decade to complete. NTIA released the findings in a detailed analysis of the plan.

The agency wants to move more than 3,100 individual frequency assignments of 20 federal agencies out of the 1755-1850 MHz frequency band to other bands. It will pay for this shift from auctions of spectrum to commercial providers. "Current law requires that auction proceeds exceed expected federal relocation costs. Since federal relocation costs are expected to be high, any repurposing option needs to promote economic value while ensuring no loss of critical federal capabilities," NTIA said.

The reallocation plan calls for shifting two key Army battlefield network systems, Warfighter Information Network-Tactical and the backpack version of the Joint Tactical Radio System, the agency said in an appendix to the analysis.

The JTRS backpack radio operates in the 1350-1390 MHz and 1755-1850 MHz bands slated for commercial use. NTIA said a planned move to frequencies above 2 GHz "is not achievable without significant engineering modifications." WIN-T, which operates in the 1350-2690 MHz range, can be retuned and shifted to the 2 GHz band, according to the agency.

The Air Force operates satellite control links in the 1755-1850 MHz band. Due to the long orbit life of some satellites, control link frequencies could not be moved until 2045, NTIA said.

The Defense Department estimated it would spend $2.35 billion to move satellite control operations out of the 1755-1850 MHz band. Defense also operates small unmanned aerial systems in this band, and NTIA estimated it would cost $1.4 billion to switch those frequencies.

The Homeland Security, Justice and Treasury departments operate video surveillance systems in the band slated for commercial use, and NTIA put the cost of moving those systems at approximately $4.9 billion.

DHS also uses the band to support security enforcement along the border between Texas and Mexico. Several point-to-point microwave systems provide a critical link for the various communications and surveillance technologies DHS field personnel use. These wireless systems route voice from tactical radios, video imagery from remote surveillance systems, and data from ground sensors to key decision-makers in command and communications centers, NTIA said.

Some of the federal systems in this band may require more than a decade to relocate, which could further complicate deployment of commercial services.

NTIA is proposing both spectrum sharing and reallocation to accommodate federal and commercial users.

http://www.nextgov.com/nextgov/ng_20120329_2242.php?oref=rss

 

 

Economists offer more pessimistic view on manufacturing in upcoming report

Washington Post

By Peter Whoriskey, Published: March 19

 

During the 2000s, as U.S. manufacturing was transformed by devastating job losses, prominent economists and presidential advisers offered comforting words.

The paring of the manufacturing workforce, which shrank by a third over the decade, actually represented good news, they said. It meant that U.S. workers and factories had become more efficient and that, as a result, manufacturing companies needed fewer people.

“What happened to manufacturing? In two words, higher productivity,” Robert Reich, who served as labor secretary in the Clinton administration, wrote in 2009.

“The decline in U.S. manufacturing employment is explained by rapid growth in manufacturing productivity over the past 50 years,” said Glenn Hubbard, chairman of the Council of Economic Advisers (CEA) under President George W. Bush.

But a handful of economists are challenging that explanation, chipping away at the long-offered assurances that the state of U.S. manufacturing is not as bad as employment numbers make it look.

Instead, they say, it’s significantly worse.

What caused the job losses, in their view, is less the efficiency of U.S. factories than the failure of those factories to hold their own amid global competition and rising imports. The apparent productivity gains reflected in the official U.S. statistics have been miscalculated and misrepresented, they say, a position that has been at least partially validated by recent research.

“I bought into this idea for a long time that it was superior labor productivity that caused most manufacturing job losses,” said Rob Atkinson of the Information Technology and Innovation Foundation, a nonpartisan think tank. “Then I began to dig into the numbers.”

The arguments, which get a fresh airing in a report to be issued this week by the think tank, are being mounted as economists and politicians on the presidential campaign trail debate what, if anything, to do to help the nation's manufacturers. Among the options are tax incentives, trade assistance and education credits.

“These numbers have been tossed about to say, ‘Look how productive U.S. factories have been,’ ” said Susan Houseman, senior economist at the W.E. Upjohn Institute, co-author with three Federal Reserve economists of a paper that raises questions about the accuracy of the productivity numbers. “The reality is a lot more complex and not as flattering.”

As calculated by federal statisticians, the productivity growth of U.S. factories has seemed quite impressive. Between 1991 and 2011, productivity more than doubled, meaning that a single worker today produces what two did 20 years ago, according to Bureau of Labor Statistics figures.

Looking at this number, many economists have concluded that the loss of manufacturing work could be considered a success story. Just as farming became more efficient over the previous century and fewer Americans found jobs on farms, U.S. manufacturing is simply becoming more efficient, as economists such as N. Gregory Mankiw, CEA chairman under Bush, and Austan Goolsbee, a recent CEA chief under President Obama, have argued.

“It is exactly the same process that agriculture went through,” Goolsbee said in a 2006 speech.

While many Americans blamed free trade for much of the unemployment in the Rust Belt, the idea that productivity was the culprit amounted to orthodoxy among many mainstream economists.

But as Houseman, Atkinson and others have pointed out, that view may mask a far more complex reality.

For starters, the reported productivity gains may be overstated because the statistics the government collects do not adequately reflect the changes that have come with globalization, as Houseman and Federal Reserve colleagues Christopher Kurz, Paul Lengermann and Benjamin Mandel have shown.

Calculating labor productivity depends on determining the value of U.S. manufacturing output and dividing it by the number of manufacturing-worker hours.

But in a time when factories increasingly have turned to outsourcing, it can be difficult to determine what is U.S. manufacturing output and what should be properly counted as output from a foreign factory.

Critically, Houseman and others have shown that the price savings that U.S. factories have realized from outsourcing have incorrectly shown up as gains in U.S. output and productivity.

This bias may have accounted for as much as half of the growth of U.S. manufacturing output from 1997 to 2007, excluding computers and electronics manufacturing, Houseman and her co-authors have estimated.

For example, suppose a U.S. factory decides to offshore the production of a part for which it used to pay $1. With the switch to an overseas supplier, it might pay 50 cents for the part. If U.S. statistics do not capture this drop in price, the savings by the U.S. factory can show up as a gain in output and productivity.
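The mechanism can be sketched numerically. A minimal illustration, assuming a hypothetical $10 gross output and 10 worker-hours per unit (only the $1.00-to-$0.50 part price comes from the article's example):

```python
sale_price = 10.0   # factory's gross output, in constant (old-price) dollars
hours = 10.0        # worker-hours per unit of output

# One imported part per unit. Its true price fell from $1.00 (domestic
# supplier) to $0.50 (overseas), but the import price index misses the
# switch and keeps recording $1.00.
nominal_spend = 0.50
recorded_price, true_price = 1.00, 0.50

measured_real_input = nominal_spend / recorded_price  # looks like half a part
true_real_input = nominal_spend / true_price          # actually one whole part

# Real value added = gross output minus real intermediate inputs
measured_va = sale_price - measured_real_input * recorded_price  # 9.50
true_va = sale_price - true_real_input * recorded_price          # 9.00

print(round(measured_va / hours, 3))                      # 0.95  measured productivity
print(round(true_va / hours, 3))                          # 0.9   true productivity
print(round((measured_va - true_va) / true_va * 100, 1))  # overstated by ~5.6%
```

Scaled across thousands of offshored inputs, uncaptured price drops of this kind are how the statistics can register productivity gains that never happened on the factory floor.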

The federal statistical agencies, which have helped fund Houseman’s work, agree that the bias exists, though they say there might be other problems that are offsetting.

“Figuring out where the productivity gains actually happened can be difficult,” said Brent Moulton, associate director for national economic accounts at the Bureau of Economic Analysis, which generates the output figures used in calculating commonly used measures of labor productivity.

On the agency’s Web site, a new message says the bias could account for overestimating the nation’s gross domestic product and productivity by one-tenth or two-tenths of a percentage point.

But there may be another, broader problem with the manufacturing output and productivity figures.

Those numbers lump all manufacturing together when there are actually two very different trends afoot.

Between 2000 and 2010, manufacturing output of computer and electronic products rose at a remarkable rate of almost 18 percent per year.

Over the same period, output in the rest of U.S. manufacturing remained roughly flat, according to Bureau of Economic Analysis figures tallied by Houseman. That’s a dismal showing for a decade.

It is only when computer and electronic products are included that overall manufacturing output registers the impressive increases. Though it represents 15 percent or less of manufacturing output, the sector’s strong growth makes the rest of U.S. manufacturing seem much more robust than it really is.

Moreover, as the critics point out, there are reasons to question what the remarkable growth in computer and electronics production really means.

For one thing, much of the nation's production of computers and electronics has moved overseas. Shipments of consumer electronics from U.S. factories fell about 70 percent between 2000 and 2010, according to the Census Bureau's Current Industrial Report.

Moreover, at least some of the productivity gains shown in U.S. computer manufacturing reflect the increasing power and decreasing prices that come with innovation. When a computer chip doubles in efficiency, that can turn up in a doubling of output and productivity in computer manufacturing. But that is not what is ordinarily thought of as manufacturing efficiency.

“It is innovation that makes it look like they’re manufacturing a whole lot more in the U.S. than they really are,” Atkinson said.

Finally, the critics note that high productivity should not necessarily lead to job losses. During the 1990s, for example, when reported U.S. productivity was also high, the job losses in manufacturing were slight compared with the drama of the past decade, when employment in the sector dropped from 17.2 million to 11.6 million, according to Bureau of Labor Statistics figures.

When pressed, economists on both sides of the debate about the health of U.S. manufacturing acknowledge that it amounts to a matter of emphasis. But the differences between the two camps are large enough to alter how one should view the future of U.S. manufacturing.

Atkinson and other critics of the productivity story concede that indeed, some of the job loss was caused by increasing productivity. Many factories are filled with automated machines that are far more efficient, after all, so one worker can account for far more output. But he thinks the primary cause of the job loss was that U.S. manufacturers lost their edge amid increasing world competition. Based on that view, he favors changes to government policy that he says could return millions of manufacturing jobs to the United States.

Similarly, Goolsbee says there are many reasons manufacturing has shed so many jobs. And Reich, now at the University of California at Berkeley, says that while he has recently focused on productivity as an explanation of the job losses, “I don’t think there’s any question that outsourcing has played an important role. The question is, what role?”

Regardless of what caused the job loss, Reich said, “it’s difficult to see a huge number of jobs coming back in manufacturing.”

 

http://www.washingtonpost.com/business/economy/economists-offer-more-pessimistic-view-on-manufacturing-in-upcoming-report/2012/03/19/gIQAKSpZNS_story.html?wpisrc=al_economy_b

 

Sprint cancels LightSquared LTE deal

The end of the planned 15-year partnership leaves LightSquared without its main network partner

By Stephen Lawson

March 16, 2012 01:02 PM ET

 

IDG News Service – Sprint Nextel has terminated its 15-year spectrum-hosting agreement with LightSquared, eliminating the would-be wholesale mobile operator’s main carrier partnership even as the U.S. Federal Communications Commission seeks to shut down its network plans.

The deal, which the companies announced last July, called for Sprint to host LightSquared’s controversial 1.6GHz spectrum on its Network Vision infrastructure, in effect letting LightSquared piggyback on Sprint’s network and save itself $13 billion over eight years. For this, LightSquared was to pay Sprint $9 billion in cash and grant it $4.5 billion worth of credits to use LightSquared’s spectrum for its own services.

Sprint extended the partnership twice from its original termination date at the end of last year, and the deal expired Thursday. In a statement Friday, the mobile carrier cited the unresolved GPS (Global Positioning System) interference issues that have prevented LightSquared from receiving FCC approval for its LTE network. On Feb. 14, the FCC proposed steps that would shut down LightSquared’s hybrid satellite-LTE network plan.

“We remain open to considering future spectrum hosting agreements with LightSquared, should they resolve these interference issues, as well as other interested spectrum holders,” Sprint said. Sprint has returned $65 million in prepayments that LightSquared made to cover costs that Sprint never incurred. The companies had halted deployment design and implementation on the project late last year.

Sprint said its Network Vision rollout remains on schedule, with a Sprint LTE service launch set for the middle of this year, and the termination of the deal won’t have a material effect on its finances.

In a statement, LightSquared said the cancellation was in the best interest of both parties, given the regulatory delays. The change will enhance LightSquared’s working capital and give it more flexibility, the company said.

“Sprint has been a valued partner to LightSquared and we look forward to working together in the future,” said Doug Smith, chief network officer and interim co-CEO, in the statement.

The deal with LightSquared was one component of a complex plan by Sprint to bring together enough spectrum to offer a robust LTE service. Sprint also plans to reuse spectrum currently devoted to other services and to coordinate its LTE plans with Clearwire, in which it is the biggest shareholder.

Clearwire currently supplies Sprint’s WiMax 4G network, and it plans to build its own LTE network. Since the FCC acted to shut down LightSquared’s LTE plans, two wholesale customers of the startup have signed agreements with Clearwire.

http://www.computerworld.com/s/article/9225276/Sprint_cancels_LightSquared_LTE_deal?source=CTWNLE_nlt_mobilew_2012-03-19

 

GOP budget would mean five-year pay freeze for federal workers

Washington Post

By Joe Davidson, Published: March 20

“Another day another dollar workin’ my whole life away

The boss told me I’d get paid weakly and that’s exactly how I’m paid”

Federal employees might consider this old country tune as their anthem if the spending plan proposed Tuesday by House Budget Committee Chairman Paul Ryan (R-Wis.) becomes law.

The House Republican budget he presented would extend the current federal pay freeze by three additional years, for a total of five years with no boost in basic pay rates. His budget also would shrink the federal workforce by more than 200,000 positions and require employees to pay an undefined “more equitable contribution to their retirement plans.”

Dubbed “The Path to Prosperity: A Blueprint for American Renewal,” the Republican plan would cost federal workers $368 billion over 10 years.

“The reforms called for in this budget aim to slow the federal government’s unsustainable growth and reflect the growing frustration of workers across the country at the privileged rules enjoyed by government employees,” the blueprint says. “They reduce the public-sector bureaucracy, not through layoffs, but via a gradual, sensible attrition policy. By 2015, this reform would result in a 10 percent reduction in the federal workforce.”

Speaking of “growing frustration,” that’s a good description of the feelings many federal workers and their organizations have toward the steady stream of Republican efforts to cut federal pay and benefits. More than 20 such congressional proposals have been introduced.

“Federal employees already have had their pay frozen for two consecutive years, an unprecedented action that will save the government $60 billion over 10 years,” said John Gage, president of the American Federation of Government Employees. “And new federal employees [with less than five years of previous federal service] will pay four times as much in retirement contributions, saving taxpayers an additional $15 billion. That’s a total of $75 billion in savings.

“It is fundamentally wrong for federal employees to be required, yet again, to serve as the Automated Teller Machine for the nation. Enough is enough.”

The National Treasury Employees Union (NTEU) said the budget plan threatens “severe cuts” in federal service.

“It is almost as though the authors of this budget don’t know, don’t understand or don’t care about the key role federal employees play in helping keep our nation safe, ensuring that our food and medicines are safe and effective, that our air and water are safe, and performing so many other services that people not only expect and want, but need, as well,” said NTEU President Colleen M. Kelley.

On that point, the National Active and Retired Federal Employees Association (NARFE) noted the impact that Ryan’s proposed cuts would have on security-related departments.

“It’s worth remembering,” said NARFE President Joseph A. Beaudoin, “that losing one in 10 federal workers means losing 100,000 employees at the U.S. Departments of Defense, Veterans Affairs, Justice and Homeland Security.”

My colleague Eric Yoder (who blogged about the GOP budget) pointed out that those departments actually account for more than 1.3 million employees, meaning a 10 percent cut potentially could hit 130,000 staffers.

As Eric noted, the budget plan also would order six House committees to achieve savings in programs under their jurisdiction, in order to avoid across-the-board cuts that could otherwise occur under last year’s debt-ceiling deal. For the Oversight and Government Reform Committee, the Ryan plan suggests “making pensions for federal workers more like those for workers in the private sector.”

In a March 9 letter to Ryan, that committee’s chairman, Darrell Issa (R-Calif.), said that while he supports the current pay freeze, the oversight panel “does not believe that a permanent freeze on civilian pay is sustainable or desirable.”

Instead of a never-ending freeze, Issa said his committee will “consider legislation to establish a total compensation system that is market and performance sensitive.” He offered no details.

Issa and the White House used similar language in opposing a permanent freeze. But President Obama went a step further and proposed a 0.5 percent federal pay raise beginning in 2013.

In a letter responding to Obama’s budget proposal, Democrats on the oversight committee said such a small raise “may not be enough to allow middle-class federal workers to keep pace with the increasing price of goods and services.”

http://www.washingtonpost.com/politics/gop-budget-would-mean-5-year-pay-freeze/2012/03/20/gIQAtteBQS_story.html?wprss=rss_politics

 

ISPs commit to new cybersecurity measures

The recommendations from an FCC advisory committee target botnets, domain name fraud and Internet route hijacking

By Grant Gross

March 22, 2012 03:06 PM ET

IDG News Service – A group of U.S. Internet service providers, including the four largest, have committed to taking new steps to combat three major cybersecurity threats, based on recommendations from a U.S. Federal Communications Commission advisory committee.

The ISPs, including AT&T, Comcast, Time Warner Cable and Verizon Communications, committed Thursday to implement measures to fight botnets, domain name fraud and Internet route hijacking. The FCC’s Communications Security, Reliability and Interoperability Council (CSRIC) adopted the recommendations for voluntary action by ISPs the same day.

Eight wired and wireless ISPs, representing about 80% of the broadband subscribers in the U.S., are members of CSRIC and signed on to the recommendations.

“These actions will have a significant positive impact on Internet security,” FCC Chairman Julius Genachowski said. “If you own a PC, you’ll be significantly better protected against your computer [being] taken over by a bad actor, who could destroy your private files or steal your personal information. If you shop or bank online, you’ll be significantly better protected against being directed to an illegitimate website and having your credit card number stolen.”

The recommendations preserve the open architecture of the Internet and protect Internet users’ privacy, Genachowski said.

The CSRIC recommendations embraced by the ISPs include an antibot code of conduct. ISPs agreed to educate customers about botnets and to take steps to identify botnet activity on their networks. ISPs will also warn customers about botnet infections on their computers and offer assistance to customers with compromised computers, under the code of conduct.

The ISPs also committed to a set of best practices for securing the Internet’s Domain Name System by deploying DNSSEC, a set of secure protocol extensions designed to prevent DNS spoofing.
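The "chain of trust" that makes DNSSEC work can be sketched with a toy model: each parent zone publishes a digest vouching for its child zone's key, anchored at the root. This is an illustrative sketch only, not real DNSSEC (which uses DNSKEY, DS and RRSIG records with actual public-key signatures); all zone names and keys below are hypothetical.

```python
import hashlib

def ds_digest(zone: str, public_key: bytes) -> str:
    """A parent zone publishes a digest (loosely, a DS record) of the child's key."""
    return hashlib.sha256(zone.encode() + public_key).hexdigest()

# The root's key is the trust anchor, known in advance by validators.
org_key = b"org-public-key"                  # hypothetical key material
example_org_key = b"example.org-public-key"  # hypothetical key material

# Each parent vouches for its child's key by publishing its digest.
chain = {
    "org.": ds_digest("org.", org_key),                          # in the root zone
    "example.org.": ds_digest("example.org.", example_org_key),  # in org.
}

def validate(zone: str, claimed_key: bytes) -> bool:
    """A resolver checks a claimed key against the parent's published digest."""
    return chain.get(zone) == ds_digest(zone, claimed_key)

print(validate("example.org.", example_org_key))  # True: key matches the chain
print(validate("example.org.", b"spoofed-key"))   # False: spoofing detected
```

The point of the model is the one Quinn makes below: validation only works if every link in the chain, from the root down through each industry's own domains, actually signs its zone.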

CSRIC also recommended that the Internet industry develop an Internet Protocol route-hijacking framework, including new technologies and practices to limit how often Internet traffic is misdirected.

T-Mobile USA, one of the ISPs signing on to the recommendations, called cybersecurity an “extremely important issue.” The company supports voluntary, industrywide deployment of DNSSEC, T-Mobile said in a statement.

ISPs will need help from other Internet companies to implement the security measures, said Bob Quinn, AT&T’s senior vice president for federal regulatory affairs.

“DNSSEC is predicated upon a chain of trust across the Internet,” he wrote in a blog post. “[CSRIC] recommends that key industry segments such as banking, healthcare and others sign their respective domains and that software developers, such as web-browser developers, study how and when to incorporate DNSSEC validation functions into their software.”

The botnet recommendations see a “significant role” for other companies, including security software vendors and operating system developers, he added. “Keeping the Internet safe for consumers to browse, transact business and communicate is an important objective not only for AT&T but any other business that operates online,” he wrote.

http://www.computerworld.com/s/article/9225446/ISPs_commit_to_new_cybersecurity_measures?source=CTWNLE_nlt_security_2012-03-23&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+computerworld%2Fs%2Ffeed%2Ftopic%2F17+%28Computerworld+Security+News%29

 

Cyber and Drone Attacks May Change Warfare More Than the Machine Gun

By Ross Andersen

The new look of drone-enabled war. (Photo: Reuters)

From state-sponsored cyber attacks to autonomous robotic weapons, twenty-first century war is increasingly disembodied. Our wars are being fought in the ether and by machines. And yet our ethics of war are stuck in the pre-digital age. 

We’re used to thinking of war as a physical phenomenon, as an outbreak of destructive violence that takes place in the physical world. Bullets fly, bombs explode, tanks roll, people collapse. Despite tremendous changes in the technology of warfare, it has remained a contest of human bodies. But as the drone wars have shown, that’s no longer true, at least for one side of the battle.

 

Technological asymmetry has always been a feature of warfare, but no nation has ever been able to prosecute a war without any physical risk to its citizens. What might the ability to launch casualty-free wars do to the political barriers that stand between peace and conflict? In today’s democracies politicians are obligated to explain, at regular intervals, why a military action requires the blood of a nation’s young people. Wars waged by machines might not encounter much skepticism in the public sphere.

We just don’t know what moral constraints should apply to these new kinds of warfare. Take the ancient, but still influential, doctrine of Just War Theory, which requires that war’s destructive forces be unleashed only when absolutely necessary; war is to be pursued only as a last resort and only against combatants, never against civilians.

But information warfare, warfare pursued with information technologies, distorts concepts like “necessity” and “civilian” in ways that challenge these ethical frameworks. An attack on another nation’s information infrastructure, for instance, would surely count as an act of war. But what if it reduced the risk of future bloodshed? Should we really only consider it as a last resort? The use of robots further complicates things. It’s not yet clear who should be held responsible if and when an autonomous military robot kills a civilian. 

These are the questions that haunt the philosophers and ethicists who think deeply about information warfare, and they will only become more pertinent as our information technologies become more sophisticated. Mariarosaria Taddeo, a Marie Curie Fellow at the University of Hertfordshire, recently published an article in Philosophy & Technology called “Information Warfare: A Philosophical Perspective” that addresses these questions and more. What follows is my conversation with Taddeo about how information technology is changing the way we wage war, and what philosophy is doing to catch up.

 

How do you define information warfare?

Taddeo: The definition of “information warfare” is hotly debated. From my perspective, for the purposes of philosophical analysis, it’s best to define information warfare in terms of concrete forms, and then see if there is a commonality between those forms. One example would be cyber-attacks or hacker attacks, which we consider to be information warfare; another example would be the use of drones or semi-autonomous machines. From those instances, to me, a good definition of information warfare is “the use of information communication technologies within a military strategy that is endorsed by a state.” And if you go to the Pentagon they will speak about this in different ways, they put it under different headings, in terms of information operations or cyber warfare, cyber attacks, that sort of thing. 

Was Russia’s attack on Estonia in 2007 the first broad-based state example of this?

Taddeo: The attack on Estonia is certainly one example of it, but it’s only one instance, and it’s not the first. You could, for example, point to the SWORDS robots that were used in Iraq several years prior to the attack on Estonia, or the use of Predator drones, etc. Remember, information warfare encompasses more than information communication technologies used through the web; these technologies can be used in several different domains and in several different ways.

But it’s hard to point to a definitive first example of this. It goes back quite a ways, and these technologies have been evolving for some time now; remember that the first Internet protocols were developed by DARPA—in some sense, these technologies were born in the military sphere. Turing himself, the father of computer science, worked mainly within military programs during the Second World War.

Interesting, but do I understand you correctly that you distinguish this new kind of information warfare from pre-internet information technologies like the radio and the telegraph? 

Taddeo: Well those are certainly information technologies, and to some extent information has always been an important part of warfare, because we have always wanted to communicate and to destroy our enemies’ information structures and communication capabilities. What we want to distinguish here is the use of these new kinds of information communication technologies, because they have proved to be much more revolutionary in their effects on warfare than previous technologies like telegraphs or telephones or radios or walkie-talkies. 

What’s revolutionary about them is that they have restructured the very reality in which we perceive ourselves as living, and the way in which we think about the concepts of warfare or the state. Take for example the concept of the state: we currently define a state as a political unit that exercises power over a certain physical territory. But when you consider that states are now trying to dominate certain parts of cyberspace as well, our definition becomes problematic, because cyberspace doesn’t have a defined territory. The information revolution is shuffling these concepts around in really interesting ways from a philosophical perspective, and more specifically, from an ethical perspective.

In your paper you mention the use of robotic weapons like drones as one example of the rapid development of information warfare. You note that the U.S. government deployed only 150 robotic weapons in Iraq in 2004, but that number had grown to 12,000 by 2008. Is this a trend you expect to continue?

Taddeo: I expect so. There are several ways that the political decisions to endorse or deploy these machines are encouraged by the nature of these technologies. For one they are quite a bit cheaper than traditional weapons, but more importantly they bypass the need for political actors to confront media and public opinion about sending young men and women abroad to risk their lives. These machines enable the contemplation of military operations that would have previously been considered too dangerous for humans to undertake.  From a political and military perspective, the advantages of these weapons outweigh the disadvantages quite heavily.

But there are interesting problems that surface when you use them; for instance, when you have robots fighting a war in a foreign country, the population of that country is going to be slow to gain trust, which can make occupation or even just persuasion quite difficult. You can see this in Iraq or Afghanistan, where the populations have been slower to develop empathy for American forces because they see them as people who send machines to fight a war.

“Populations have been slower to develop empathy for American forces because they see them as people who send machines to fight a war.”

But these shortcomings aren’t weighty enough to convince politicians or generals to forgo the use of these technologies, and because of that I expect this trend towards the use of robotic weapons will continue. 

You note the development of a new kind of robotic weapon, the SGR-A1, which is now being used by South Korea to patrol its border with North Korea. What distinguishes the SGR-A1 from previous weapons of information warfare?

 

Taddeo: The main difference is that this machine doesn’t necessarily have a human operator, or a “man in the loop” as some have phrased it. It can autonomously decide to fire on a target without having to wait for a signal from a remote operator. In the past drones have been tele-operated, or if not, they didn’t possess firing ability, and so there was no immediate risk that one of these machines could autonomously harm a human being. The fact that weapons like the SGR-A1 now exist tells us that there are questions that we need to confront. It’s wonderful that we’re able to save human lives on one side, our side, of a conflict, but the issues of responsibility, the issue of who is responsible for the actions of these semi-autonomous machines remain to be addressed. 

Of course it’s hard to develop a general rule for these situations where you have human nature filtered through the actions of these machines; it’s more likely we’re going to need a case-by-case approach. But whatever we do, we want to push as much of the responsibility as we can into the human sphere. 

In your paper you say that information warfare is a compelling case of a larger shift toward the non-physical domain brought about by the Information Revolution. What do you mean by that?

Taddeo: It might make things more clear to start with the Information Revolution. The phrase “Information Revolution” is meant to convey the extraordinary ways that information communication technologies have changed our lives. There are of course plenty of examples of this, including Facebook and Twitter and that sort of thing, but what these technologies have really done is introduce a new non-physical space that we exist in, and, increasingly, it’s becoming just as important as the offline or physical space—in fact events in this non-physical domain often affect events in the physical world. 

Information warfare is one way that you can see the increasing importance of this non-physical domain. For example, we are now using this non-physical space to prove the power of our states; we are no longer concerned with demonstrating their authority only in the physical world.

In what ways might information warfare increase the risk of conflicts and human casualties?

Taddeo: It’s a tricky question, because the risks aren’t yet clear, but there is a worry that the number of conflicts around the world could increase, because these technologies make it easier for those who direct military attacks to launch them without endangering the lives of their own citizens. As I mentioned before, information warfare is in this sense easier to wage from a political perspective.

It’s more difficult to determine the effect on casualties. Information Warfare has the potential to be blood-free, but that’s only one potentiality; this technology could just as easily be used to produce the kind of damage caused by a bomb or any other traditional weapon—just imagine what would happen if a cyber-attack was launched against a flight control system or a subway system. These dangerous aspects of information warfare shouldn’t be underestimated; the deployment of information technology in warfare scenarios can be highly dangerous and destructive, and so there’s no way to properly quantify the casualties that could result. This is one reason why we so badly need a philosophical and ethical analysis of this phenomenon, so that we can properly evaluate the risks. 

Part of your conception of information warfare is as an outgrowth of the Information Revolution. You draw on the work of Luciano Floridi, who has said that the Information Revolution is the fourth revolution, coming after the Copernican, Darwinian and the Freudian revolutions, which all changed the way humans perceive themselves in the Universe. Did those revolutions change warfare in interesting ways? 

Taddeo: That’s an interesting question. I don’t think those revolutions had the kind of impact on warfare that we’re seeing with the Information Revolution. Intellectual and technological revolutions seem to go hand in hand, historically, but I don’t, to use one example, think that the Freudian Revolution had a dramatic effect on warfare. The First World War was waged much like the wars of the 19th century, and to the extent that it wasn’t, those changes did not come about because of Freud. 

 

What you find when you study those revolutions is that while they may have resulted in new technologies like the machine gun or the airplane, none of them changed the concept of war. Even the Copernican Revolution, which was similar to the Information Revolution in the sense that it dislocated our sense of ourselves as existing in a particular space and time, didn’t have this effect. The concept of war remained intact in the wake of those revolutions, whereas we are finding that the concept of war itself is changing as a result of the Information Revolution. 

How has the Information Revolution changed the concept of war?

Taddeo: It goes back to the shift to the non-physical domain; war has always been perceived as something distinctly physical involving bloodshed and destruction and violence, all of which are very physical types of phenomena. If you talk to people who have participated in warfare, historically, they will describe the visceral effects of it—seeing blood, hearing loud noises, shooting a gun, etc. Warfare was, in the past, always something very concrete. 

This new kind of warfare is non-physical; of course it can still cause violence, but it can also be computer to computer, or it can be an attack on certain types of information infrastructure and still be an act of war. Consider the Estonian cyber-attack, where you had a group of actors launching an attack on institutional websites in Estonia; there were no physical casualties, there was no physical violence involved. Traditional war was all about violence; the entire point of it was to physically overpower your enemy. That’s a major change. It shifts the ethical analysis, which was previously focused only on minimizing bloodshed. But when you have warfare that doesn’t lead to any bloodshed, what sort of ethical framework are you going to apply?

“When you have warfare that doesn’t lead to any bloodshed, what sort of ethical framework are you going to apply?”

For some time now, Just War Theory has been one of the main ethical frameworks for examining warfare. You seem to argue that its modes of analysis break down when applied to information warfare. For instance, you note that the principle that war ought only to be pursued “as a last resort” may not apply to information warfare. Why is that?

Taddeo: Well first I would say that as an ethical framework Just War Theory has served us well up to this point. It was first developed by the Romans, and from Aquinas on many of the West’s brightest minds have contributed to it. It’s not that it needs to be discarded; quite the contrary, there are some aspects of it that need to be kept as guiding principles going forward. Still, it’s a theory that addresses warfare as it was known historically, as something very physical. 

The problem with the principle of “last resort” is that while, yes, we want physical warfare to be the last choice after everything else, it might not be the case that information warfare is to be a “last resort,” because it might actually prevent bloodshed in the long run. Suppose that a cyber-attack could prevent traditional warfare from breaking out between two nations; by the criteria of Just War Theory it would be an act of war and thus only justifiable as a last resort. And so you might not want to apply the Just War framework to warfare that is not physically violent.  

You also note that the distinction between combatants and civilians is blurred in information warfare, and that this also has consequences for Just War Theory, which makes liberal use of that distinction. How so?

Taddeo: Well until a century ago there was a clear-cut distinction between the military and civilians—you either wear a uniform or you don’t, and if you do, you are a justifiable military target. This distinction has been eroded over time, even prior to the Information Revolution; civilians took part in a number of twentieth century conflicts. But with information warfare the distinction is completely gone; not only can a regular person wage information warfare with a laptop, but also a computer engineer working for the U.S. government or the Russian government can participate in information warfare all day long and then go home and have dinner with his or her family, or have a beer at the pub. 

The problem is, if we don’t have any criteria, any way of judging who is involved in a war and who is not, then how do we respond? Who do we target? The risk is that our list of targets could expand to include people who we would now consider civilians, and that means targeting them with physical warfare, but also with surveillance, and that could be very problematic. Surveillance is a particularly thorny issue here, because if we don’t know who we have to observe, we may end up scaling up our surveillance efforts to encompass entire populations and that could have very serious effects in the realm of individual rights. 

You have identified the prevention of information entropy as a kind of first principle in an ethical framework that can be applied to information warfare—is that right, and if so, does that supplant the saving of human life as our usual first principle for thinking about these things?

Taddeo: I think they are complementary. First of all, a clarification is in order. Information entropy has nothing to do with physics or information theory; it’s not a physical or mathematical concept. Entropy here refers to the destruction of informational entities, which is something we don’t want. Informational entities are not only computers; seen from an informational perspective, all existing things are informational entities. An action that generates entropy in the universe is one that destroys, damages or corrupts the information environment, whether by ruining a beautiful painting, launching a virus that damages information infrastructure, or killing a human being. Any action that makes the information environment worse off generates entropy and is therefore immoral. In this sense the prevention of information entropy is consistent with the saving of human life, because human beings contribute a great deal to the infosphere; killing a human being would generate a great deal of information entropy.

This is all part of a wider ethical framework called Information Ethics, mainly developed by Luciano Floridi. Information Ethics ascribes a moral stance to all existing things. It does not have an ontological bias, that is to say it doesn’t privilege certain sorts of beings. This does not mean that according to Information Ethics all things have the ‘same’ moral value but rather that they ‘share’ some common minimal rights and deserve some minimal respect. Here, the moral value of a particular entity would be proportional to its contributions to the information environment. So a white paper with one dot on it would have less moral value than say a book of poems, or a human being. That’s one way of thinking about this. 

This article available online at:

http://www.theatlantic.com/technology/archive/2012/03/cyber-and-drone-attacks-may-change-warfare-more-than-the-machine-gun/254540/

 


TSP closely watches retirement provisions in transportation bills

By Amanda Palleschi

March 26, 2012

Federal retirees might soon be able to roll their annual leave into their Thrift Savings Plan accounts, depending on the fate of pending legislation.

Currently, federal employees’ annual leave is cashed out when they retire or resign, and they are reimbursed for it directly with a check. Under a House proposal, an employee could roll that money into the TSP so that the funds would not be taxed until withdrawal, according to Kim Weaver, external affairs director for the Federal Retirement Thrift Investment Board.

The measure is included in the transportation bill up for a vote in the House in the coming days, and is among provisions being tracked closely by the board, which oversees the TSP.

The other measure, included in the version of the transportation bill passed by the Senate earlier this month, would allow the Internal Revenue Service to enforce federal tax levies on civil servants by taking funds from their TSP accounts.

The board asked lawmakers for a clarification on that provision and is wary of its possible passage into law, Weaver said during a Monday meeting.

The IRS claims it has the authority to levy funds in TSP accounts, but under current law, plan funds credited to participants cannot be used for any purpose other than their benefit.

Board officials have said subjecting the TSP funds to levy would change their intended purpose.

The fate of both provisions will depend on action on the House bill. House leadership is exploring a short-term extension on transportation authorizations, which would delay a vote on the measure, according to Weaver.

It’s uncertain whether the House will have enough support to take up its version of the legislation or whether it will default to the more fed-friendly Senate-passed bill. Currently, the House bill contains provisions that would require federal workers and members of Congress to contribute a total of 1.5 percent extra to their pensions over three years, beginning in 2013.

The House version also would eliminate a current provision in the law that supplements the retirement benefits of Federal Employees Retirement System members who are not subject to mandatory retirement and leave before age 62, also known as the FERS annuity supplement. In addition, it would require that those hired (or newly elected) after Dec. 31, 2012, be placed under a high-five average salary calculation for annuities rather than the current high-three calculation.

The Senate bill contains language allowing phased-in retirements but does not include the pension provisions.

http://www.govexec.com/pay-benefits/2012/03/tsp-closely-watches-retirement-provisions-transportation-bills/41573/

 

First Air Force female four-star general confirmed

Posted 3/28/2012   

 

3/28/2012 – WASHINGTON (AFNS) — The Senate confirmed Air Force Lt. Gen. Janet Wolfenbarger for promotion March 26, making her the first female four-star general in Air Force history.

Wolfenbarger currently serves as the military deputy in the Office of the Assistant Secretary of the Air Force for Acquisition here and she is one of four female lieutenant generals in the Air Force.

“This is an historic occasion for the Air Force,” said Secretary of the Air Force Michael B. Donley. “General Wolfenbarger’s 32 years of service, highlighted by extraordinary leadership and devotion to duty, make her exceptionally qualified for this senior position and to serve as the next commander of Air Force Materiel Command.”

 

“I am honored to have been confirmed by the Senate for promotion to the rank of General and to serve as commander of Air Force Materiel Command. Until I take command of AFMC, I will continue to focus on the important Air Force acquisition work here at the Pentagon,” said Wolfenbarger.

Wolfenbarger, a native of Beavercreek, Ohio, was commissioned as a second lieutenant in 1980 after graduating in the first class with female cadets at the Air Force Academy.

She also holds a graduate degree in aeronautics and astronautics from the Massachusetts Institute of Technology, Cambridge.

The general has held several positions in the F-22 System Program Office at Wright-Patterson AFB, Ohio; served as the F-22 lead program element monitor at the Pentagon, and was the B-2 System program director for the Aeronautical Systems Center, Wright-Patterson AFB.

She commanded ASC’s C-17 Systems Group, Mobility Systems Wing and was the service’s director of the Air Force Acquisition Center of Excellence at the Pentagon, then served as director of the Headquarters AFMC Intelligence and Requirements Directorate, Wright-Patterson AFB.  Prior to her current assignment, Wolfenbarger was the vice commander of Air Force Materiel Command, Wright-Patterson AFB.

She has been awarded the Distinguished Service Medal, the Legion of Merit, the Meritorious Service Medal, the Air Force Commendation Medal, the Air Force Achievement Medal, the National Defense Medal and the Global War on Terrorism Medal.

Wolfenbarger received her third star in December 2009 and became the Air Force’s highest-ranking woman in January 2010.

Women currently make up 9.1 percent of the Air Force’s general officer ranks. In addition to the four female lieutenant generals, there are 12 major generals and 11 brigadier generals.

“This is an exciting time that pays homage to the generations of female Airmen, whose dedication, commitment and sacrifice helped open the doors for General Wolfenbarger and other female Airmen who will follow,” said Air Force Chief of Staff General Norton Schwartz.

Gen. Wolfenbarger’s change of command at Wright-Patterson Air Force Base, Ohio, is scheduled for June 5.

 

IT jobs will grow 22% through 2020, says U.S.

Biennial update says offshoring will limit growth in some occupations, while healthcare IT provides a boost

Patrick Thibodeau

March 29, 2012 (Computerworld)

WASHINGTON — U.S. officials on Thursday said that offshoring will hurt the growth of U.S. programming jobs in this decade, though expansion of healthcare IT and mobile networks will in turn increase demand for software developers, support technicians and systems analysts.

By 2020, employment in all computer occupations is expected to increase by 22%, but some IT fields will fare better than others, according to the U.S. Bureau of Labor Statistics (BLS) biennial update of employment projections.

Demand for software developers will be the strongest in this period, with increases ranging from 28% to 32%, depending on the type of software development.

The BLS update projects what IT employment will look like through 2020.

The agency’s forecasts, particularly for technology-related jobs, are often controversial because they can’t account for rapid market changes and tech disruptions. But its estimates are often cited in various policy debates on issues ranging from education to immigration.

The IT employment growth rate projected by the BLS was characterized as “anemic” by Victor Janulaitis, CEO of Janco Associates, a research firm that analyzes IT wage and employment trends.

“When you consider the overall demand for systems and applications in high-growth markets like China and India, [the BLS projections] mean the U.S. will be doing a diminishing portion of the development and implementation work,” said Janulaitis. “If that’s the case, the U.S. will no longer be the leader in IT.

“The BLS projections are a bad sign for the U.S. IT graduates from universities. Those numbers do not cover the net growth necessary to give all of the graduates jobs,” Janulaitis added.

The outlook varies from occupation to occupation. Here’s a look at the BLS projections for various tech jobs.

 

Database administrators

Demand for database administrators is expected to increase by 31%, with 33,900 jobs being added this decade as enterprises endeavor to cull valuable information from an ever-growing mountain of data.

The field employed 110,800 people in 2010 at a median salary of $73,490.

 

IT managers

For IT managers, employment is projected to increase by 55,800 jobs, or 18%, to 363,700 jobs by 2020.

The BLS said growth in the healthcare industry and the need for more IT security may spur an increase in the number of IT management jobs, but the agency added that “cloud computing may shift some IT services to computer systems design and related services firms, concentrating jobs in that industry.”

The median pay for IT managers in 2010 was $115,780.

 

Computer programmers

The IT occupation expected to show the weakest growth is computer programming, which is also the profession most susceptible to offshoring.

The number of people employed as computer programmers will increase by just 12% through the decade, from 363,100 in 2010 to 406,800 by 2020, the BLS projects.

That’s less than the expected 14% increase in the number of U.S. jobs across all occupations, IT and otherwise, over the same period, according to the BLS.

Offshoring was blamed for relatively weak growth in demand for computer programmers.

“Since computer programming can be done from anywhere in the world, companies often hire programmers in countries where wages are lower,” said the BLS. “This ongoing trend will limit growth for computer programmers in the United States.”

The median pay for programmers in 2010 was $71,380.

 

Help desk, technical support

The healthcare industry’s growth and its shift to electronic records will help spur demand for computer support specialists, a job category that includes help desk personnel.

There were 607,000 people employed in computer support in 2010. That figure is expected to grow to 717,000 by 2020, for an 18% increase.

The 2010 median pay for computer support specialists was $46,260.

Technical support workers are also being hurt by a shift of jobs to other countries, though the BLS is a little more optimistic about the outlook for this occupation.

“A recent trend to move jobs to lower-cost regions of the United States may offset some loss of jobs to other countries,” the bureau said, referring to the trend of hiring people who handle call center duties largely from their homes.

 

Systems analysts

The BLS projects that the number of computer systems analysts will grow by 22% through 2020, thanks to the spread of mobile networks and the expanded use of computerized healthcare tools, such as e-prescription systems.

The number of computer systems analysts in the U.S. in 2010 was 544,000. That figure is expected to rise 22% to 664,800 by 2020. The median salary of a systems analyst was $77,740 in 2010.

 

Software developers

The BLS said there were 913,100 software developers in the U.S. in 2010; they were earning a median salary of $90,530.

The bureau forecasts that the number of software development jobs will increase by 30%, or 270,900, through 2020.

The fastest rate of growth (32%) will be for system software developers, and the lowest (28%) will be for application developers, the bureau said.

 

Other IT occupations

In 2010, there were 347,200 network and computer systems administrators employed in the U.S., earning a median wage of $69,160. Employment in these occupations is expected to grow by 96,600 jobs, or 28%, through 2020.

The BLS lumps information security analysts, Web developers and computer network architects in a single category. It said a combined 302,300 people held those positions in 2010 and projected that employment in that category would grow 22% to 367,900 jobs by 2020. The median pay for all workers in that category was $75,660 in 2010.
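The growth percentages quoted above can be recomputed from the 2010 baseline employment and the projected 2010-2020 job additions reported in this article. A minimal sketch (the dictionary and its field names are illustrative; for IT managers the 2010 baseline of 307,900 is derived from the projected 363,700 total minus the 55,800 added jobs):

```python
# 2010 employment and projected jobs added by 2020, per the BLS figures
# cited in this article.
occupations = {
    "database administrators": (110_800, 33_900),
    "IT managers": (307_900, 55_800),          # baseline derived: 363,700 - 55,800
    "computer programmers": (363_100, 43_700),  # 406,800 - 363,100 added
    "computer support specialists": (607_000, 110_000),
    "systems analysts": (544_000, 120_800),
    "software developers": (913_100, 270_900),
    "network/systems administrators": (347_200, 96_600),
}

for name, (base, added) in occupations.items():
    pct = 100 * added / base  # percent growth over the decade
    print(f"{name}: {base + added:,} jobs by 2020 ({pct:.0f}% growth)")
```

Rounding each result to the nearest whole percent reproduces the figures in the article (31%, 18%, 12%, 18%, 22%, 30%, and 28% respectively).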

 

BLS critics

Ron Hira, an associate professor of public policy at the Rochester Institute of Technology, said the BLS IT forecasts have been wildly wrong in the past.

“Volatile occupations tend to be subject to bad forecasts, and it’s clear that computer occupation employment levels are very hard to forecast,” said Hira.

“The forecasts are biased toward the most recent history in the occupation,” he said.

Hira said he would place more stock in growth projections for a predictable profession. Citing primary school teachers as an example, he pointed out that the BLS can estimate the number of births during the decade and factor in teacher-student ratios to reach an estimate of employment growth.

The BLS has “no methodology to estimate technological disruptions that can increase demand for computer occupations,” said Hira, citing the rapid increases in the use of the Internet and ERP systems as examples of IT disruptions.

David Foote, CEO of Foote Partners, an IT labor market research firm, scoffed at the BLS’s projections and said they extended too far out into the future to be reliable.

In light of “current market volatility and uncertainty which is unprecedented,” Foote said that anyone who makes a 10-year IT employment projection “is kidding themselves.”

The projections don’t take into account the rate of technological change, he said.

Foote said the BLS only identifies a “small group” of IT jobs and doesn’t track the new kinds of technology jobs that combine business and IT experience to create hybrid professions. Much of that is being driven by the demand to find useful ways to apply so-called big data in a business.

 

Security breach may have compromised millions of debit and credit cards

 

Washington Post

By Brady Dennis, Published: March 30

Visa and MasterCard on Friday were trying to determine the extent of a possible security breach at a third-party vendor that experts say could compromise the credit-card and debit-card information of millions of Americans.

The two companies say they have notified law enforcement officials and alerted banks about the potential data theft, even as they seek to assure customers that their own systems have not been breached.

Both companies also emphasize that customers are not held responsible for any fraudulent purchases made on their cards.

George Ogilvie, a spokesman for the Secret Service, confirmed that the agency is investigating the matter but declined to elaborate, saying the inquiry is ongoing.

The security lapse involved Global Payments, an Atlanta-based firm that describes itself as “one of the world’s largest electronic transaction processing companies.”

In a statement Friday afternoon, Global Payments said it had determined in early March that “card data may have been accessed.” It said that company officials immediately contacted federal law enforcement, brought in information technology forensics experts to investigate and notified “appropriate industry parties to allow them to minimize potential cardholder impact.”

“It is reassuring that our security processes detected an intrusion. It is crucial to understand that this incident does not involve our merchants or their relationships with their customers,” Global Payments chief executive Paul R. Garcia said.

Global Payments stock fell more than 9 percent on news of its involvement in the data breach before trading in the stock was halted about noon. The company said it was continuing to investigate and would hold a conference call at 8 a.m. Monday.

Both MasterCard and Visa on Friday were quick to assure customers that their own systems remained safe and that they had alerted banks to any potential problems.

MasterCard said in a statement that its “own systems have not been compromised.” Visa officials also insisted that there had been “no breach of Visa systems” and that it had contacted card issuers with details about accounts that might have been compromised “so they can take steps to protect consumers through independent fraud monitoring and, if needed, reissuing cards.”

In addition to notifying law enforcement and banks, MasterCard said Friday that an independent data-security organization was conducting an ongoing forensic review of the matter.

Brian Krebs, a computer security expert who first reported the theft on his blog KrebsonSecurity.com, wrote that sources in the financial sector had described the data theft to him as “massive” and believed it could involve more than 10 million compromised card numbers.

Avivah Litan, a Maryland-based fraud analyst at the information-technology-research firm Gartner, also said Friday that she had spoken with contacts in the card business “who are seeing signs of this breach mushroom” and also believe the number of compromised numbers would reach into the millions. She said it appears that the breach at least partially involved a parking and garage company in the New York City area.

“The industry has spent billions of dollars on trying to secure the payment systems. ... They have been at this for years, trying to get merchants and payment processors and taxicabs and everything to secure their payment systems, and it’s just not working,” Litan said in an interview.

She said the United States lags behind many countries that have migrated to microchip technology in credit cards, which have cut back significantly on fraud. “We’re the only developed country that’s not using it,” Litan said.

Neither MasterCard nor Visa actually issues cards to consumers or lends money. Banks such as Wells Fargo and Bank of America typically issue the cards, while MasterCard and Visa oversee the individual transactions and charge merchants fees each time a card is swiped.

The latest incident is part of an ongoing string of electronic attacks against corporations, schools and government agencies that have repeatedly put the confidential information of Americans at risk. Last June, for instance, hackers breached a network at Citigroup and gained access to credit card data for more than 360,000 North American customers.

According to a report by the research firm Javelin, identity fraud increased during 2011 by 13 percent, and more than 11.6 million U.S. adults became victims.

One key factor behind the increase in fraud, the firm found, was the 67 percent increase in the number of Americans affected by data breaches. Research showed that victims of data breaches are 9.5 times more likely to fall prey to identity fraud than customers who had not received notice of a potential data breach.
