
December 15, 2012

Newswire

 

Can an executive order protect against a ‘cyber Pearl Harbor?’

FCW.com

By Amber Corrin

Dec 07, 2012

http://fcw.com/Articles/2012/12/07/cyber-pearl-harbor-executive-order.aspx?s=fcwdaily_101212&p=1

 

The devastating attack on Navy ships at Pearl Harbor in 1941 has become a familiar metaphor for current cyber vulnerabilities. In this image, the USS Arizona burns as the relentless assault by Japanese aircraft continues. (US Navy photo)

An executive order on cybersecurity appears to be close to reality, according to insiders who say draft versions have made their way through agencies for feedback. But as the nation remembers one day that will forever live in infamy, will a White House measure be enough to defend against another?

The attack on Pearl Harbor by the Japanese on Dec. 7, 1941, which took the U.S. Navy by surprise early in the Second World War, has become an often-used metaphor for the devastating cyber attack that many fear could come if America’s defenses are not put in place soon.

While White House officials are not commenting on where the cyber executive order currently stands, at least one draft order, dated Nov. 21, appears to have gathered enough momentum by carefully addressing certain measures that stalled legislation in Congress earlier this year.

 

Read the Nov. 21 draft executive order at http://fcw.com/Articles/2012/12/07/~/media/GIG/FCWNow/2012/December/CyberExecOrder1121Draft.ashx

 

That version of the EO does not include explicit mandates on the private sector, meaning that owners and operators of most critical infrastructure would participate on a voluntary basis.

However, it does outline orders for specific government agencies to help develop a comprehensive framework for identifying, protecting against and sharing information on cyber threats. For example, the secretary of the Homeland Security Department, who plays a leading role in the plan, is charged with producing unclassified reports on specific, targeted threats and a system for disseminating them; the Office of the Director of National Intelligence would, along with DHS, establish a system for tracking such reports and incidents. The Defense Department is also called on to create procedures for industry’s voluntary participation, and other agencies, including the National Institute of Standards and Technology, also have key roles.

With limited powers, the executive order does not take the place of legislation, which Congress is expected to take up again in 2013. It does at least try to address some of the sticking points that halted lawmaking over the past year.

“This order challenges agencies to actually work and develop a framework together to determine just what those areas are that possibly do need additional regulation,” said W. Hord Tipton, executive director of (ISC)2 and former CIO at the Interior Department. “That often gets down to the nitty-gritty that stymies regulation – industries that control 90 percent of our critical infrastructure simply don’t like regulation. Finding that appropriate balance is oftentimes the real challenge.”

Information sharing, a feature of the failed legislation that preceded the EO, is addressed in the order, including provisions to expand the use of outside subject-matter consultants and to expedite security clearances to aid in the process. The measure also directs the DHS secretary to coordinate incentives for voluntary industry participation, another sticking point in proposed legislation, although it does not specify exactly what those incentives could be.

“They’ll be looking at potential incentives – they have to be coordinated with secretaries of Treasury and Commerce, and then have to be reviewed by DOD and [General Services Administration] for the merits and the possible changes to procurement practices,” Tipton said, adding that the participation of the private sector could continue to be a hurdle. “That’s going to require cooperation with the sectors themselves, and again that’s why legislation has been difficult – you have to have that trust developed before you can get the appropriate consultations to happen.”

How far is any of that from actually reaching fruition? It is difficult to tell, although insiders say the draft EO has gone through at least some of the interagency process required to get feedback on implementation – and it hasn’t necessarily been easy, with some organizations likely disagreeing on certain points.

“This has been a struggle for the White House to get interagency consensus – some people thought this would be out well before elections. We know they waited to find out if legislation was going to pass, but I’m led to believe at this point there was more of an issue in getting this out of the overall interagency process of approval; that’s been a key part of this,” said Tim Sample, vice president and sector manager for special programs at Battelle.

The challenges to the EO likely mirror the parts of proposed legislation that created the most controversy, including mandates on private industry and direction of who is in charge of what. Some say that without such explicit requirements, the failed cybersecurity bills – and the EO – may be lacking teeth.

“It was hard to find anything in the legislation that would lead to the crafting of regulations, for example, that actually put required continuity and enforcement authority to effect change,” Tipton said. “It’ll be a kabuki dance between the executive branch, DHS, some of the other agencies and the key players in the critical infrastructure agency. But where really are the teeth to get the critical infrastructure sectors to go beyond what they’re currently doing?”

Stopping short of such comprehensive measures could leave the U.S. vulnerable, Sample noted – not unlike it was 71 years ago.

“Though the government was making some preparations by early December 1941, the country itself was not galvanized in support of war. It took Pearl Harbor to galvanize the country overnight. The fact is something needs to be done to galvanize, and I don’t get that – though there is a lot of positive – from this EO given the threats that are looming out there,” Sample said. “While I understand the pros and cons and limitations of government, this is still an effort in which you hope everybody is going to play, as opposed to leading the nation and saying with resolve that we’re not going to be left high and dry in a major cyber attack.”

 

 

 

 

New Taxes to Take Effect to Fund Health Care Law

NYTimes
By ROBERT PEAR
Published: December 8, 2012

http://www.nytimes.com/2012/12/09/us/politics/new-taxes-to-take-effect-to-fund-health-care-law.html

 

WASHINGTON — For more than a year, politicians have been fighting over whether to raise taxes on high-income people. They rarely mention that affluent Americans will soon be hit with new taxes adopted as part of the 2010 health care law.

The new levies, which take effect in January, include an increase in the payroll tax on wages and a tax on investment income, including interest, dividends and capital gains. The Obama administration proposed rules to enforce both last week.

Affluent people are much more likely than low-income people to have health insurance, and now they will, in effect, help pay for coverage for many lower-income families. Among the most affluent fifth of households, those affected will see tax increases averaging $6,000 next year, economists estimate.

To help finance Medicare, employees and employers each now pay a hospital insurance tax equal to 1.45 percent on all wages. Starting in January, the health care law will require workers to pay an additional tax equal to 0.9 percent of any wages over $200,000 for single taxpayers and $250,000 for married couples filing jointly.

The new taxes on wages and investment income are expected to raise $318 billion over 10 years, or about half of all the new revenue collected under the health care law.

Ruth M. Wimer, a tax lawyer at McDermott Will & Emery, said the taxes came with “a shockingly inequitable marriage penalty.” If a single man and a single woman each earn $200,000, she said, neither would owe any additional Medicare payroll tax. But, she said, if they are married, they would owe $1,350. The extra tax is 0.9 percent of their earnings over the $250,000 threshold.

Since the creation of Social Security in the 1930s, payroll taxes have been levied on the wages of each worker as an individual. The new Medicare payroll tax is different. It will be imposed on the combined earnings of a married couple.

Employers are required to withhold Social Security and Medicare payroll taxes from wages paid to employees. But employers do not necessarily know how much a worker’s spouse earns and may not withhold enough to cover a couple’s Medicare tax liability. Indeed, the new rules say employers may disregard a spouse’s earnings in calculating how much to withhold.

Workers may thus owe more than the amounts withheld by their employers and may have to make up the difference when they file tax returns in April 2014. If they expect to owe additional tax, the government says, they should make estimated tax payments, starting in April 2013, or ask their employers to increase the amount withheld from each paycheck.

In the Affordable Care Act, the new tax on investment income is called an “unearned income Medicare contribution.” However, the law does not provide for the money to be deposited in a specific trust fund. It is added to the government’s general tax revenues and can be used for education, law enforcement, farm subsidies or other purposes.

Donald B. Marron Jr., the director of the Tax Policy Center, a joint venture of the Urban Institute and the Brookings Institution, said the burden of this tax would be borne by the most affluent taxpayers, with about 85 percent of the revenue coming from 1 percent of taxpayers. By contrast, the biggest potential beneficiaries of the law include people with modest incomes who will receive Medicaid coverage or federal subsidies to buy private insurance.

Wealthy people and their tax advisers are already looking for ways to minimize the impact of the investment tax — for example, by selling stocks and bonds this year to avoid the higher tax rates in 2013.

The new 3.8 percent tax applies to the net investment income of certain high-income taxpayers, those with modified adjusted gross incomes above $200,000 for single taxpayers and $250,000 for couples filing jointly.

David J. Kautter, the director of the Kogod Tax Center at American University, offered this example. In 2013, John earns $160,000, and his wife, Jane, earns $200,000. They have some investments, earn $5,000 in dividends and sell some long-held stock for a gain of $40,000, so their investment income is $45,000. They owe 3.8 percent of that amount, or $1,710, in the new investment tax. And they owe $990 in additional payroll tax.
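To make the arithmetic concrete, here is a minimal sketch of the two calculations in Python, using the $250,000 joint-filing threshold cited above. The function names and structure are this editor's, not IRS form logic; the sketch does include one detail the examples gloss over, namely that the 3.8 percent tax applies to the lesser of net investment income and the excess of modified adjusted gross income over the threshold.

```python
# Minimal sketch (this editor's, not IRS form logic) of the two new
# Medicare taxes described above, using the $250,000 threshold for
# married couples filing jointly.

THRESHOLD_JOINT = 250_000
ADDL_PAYROLL_RATE = 0.009   # extra 0.9% Medicare payroll tax on wages
INVESTMENT_RATE = 0.038     # 3.8% "unearned income Medicare contribution"

def additional_payroll_tax(combined_wages: float) -> float:
    """Extra 0.9% tax on combined wages above the joint threshold."""
    return max(0.0, combined_wages - THRESHOLD_JOINT) * ADDL_PAYROLL_RATE

def investment_tax(net_investment_income: float, magi: float) -> float:
    """3.8% tax on the lesser of net investment income and the excess
    of modified adjusted gross income (MAGI) over the threshold."""
    base = min(net_investment_income, max(0.0, magi - THRESHOLD_JOINT))
    return base * INVESTMENT_RATE

# Kautter's example: John earns $160,000, Jane earns $200,000, and they
# have $45,000 of investment income ($5,000 dividends + $40,000 gain).
wages = 160_000 + 200_000
investment = 45_000
print(f"${additional_payroll_tax(wages):,.2f}")                   # $990.00
print(f"${investment_tax(investment, wages + investment):,.2f}")  # $1,710.00

# Wimer's marriage-penalty example: two $200,000 earners owe nothing
# filing singly (each sits at the $200,000 single threshold) but
# $1,350 filing jointly.
print(f"${additional_payroll_tax(200_000 + 200_000):,.2f}")       # $1,350.00
```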

The new tax on unearned income would come on top of other tax increases that might occur automatically next year if President Obama and Congress cannot reach an agreement in talks on the federal deficit and debt. If Congress does nothing, the tax rate on long-term capital gains, now 15 percent, will rise to 20 percent in January. Dividends will be treated as ordinary income and taxed at a maximum rate of 39.6 percent, up from the current 15 percent rate for most dividends.

Under another provision of the health care law, consumers may find it more difficult to obtain a tax break for medical expenses.

Taxpayers now can take an itemized deduction for unreimbursed medical expenses, to the extent that they exceed 7.5 percent of adjusted gross income. The health care law will increase the threshold for most taxpayers to 10 percent next year. The increase is delayed to 2017 for people 65 and older.

In addition, workers face a new $2,500 limit on the amount they can contribute to flexible spending accounts used to pay medical expenses. Such accounts can benefit workers by allowing them to pay out-of-pocket expenses with pretax money.

Taken together, this provision and the change in the medical expense deduction are expected to raise more than $40 billion of revenue over 10 years.

 

 

Combat UAV Moves Closer To Full Autonomy

December 7, 2012

By Paul Kruczkowski, Editor

http://www.rfglobalnet.com/doc.mvc/combat-uav-moves-closer-to-full-autonomy-0001?sectionCode=Welcome&templateCode=SponsorHeader&user=2753709&source=nl:35831

 

 

The Northrop Grumman X-47B unmanned combat air systems (UCAS) demonstrator, capable of autonomous flight and chock-full of RF and microwave payloads, made its first land-based catapult launch on November 29 at the Naval Air Systems Command (NAVAIR) in Patuxent River, Md. This milestone was a critical step in verifying the aircraft’s ability to handle the stress of a catapult launch, and in ultimately integrating the UCAS into an aircraft carrier flight deck environment. Another X-47B demonstrator was craned onto the deck of USS Harry S. Truman in Norfolk, Va., the same week, in order to test the telemetry and communication systems required for flight deck, elevator, and hangar bay maneuvering.

One interesting aspect of both the catapult launch and the flight deck testing is a new wireless, handheld device called a Control Display Unit (CDU), also designed by Northrop Grumman. The CDU will allow the deck operator to wirelessly control engine thrust, nose wheel steering, and brakes. Just as a pilot follows the director’s hand signals to move on the deck, the deck operator will use the CDU to move the X-47B quickly and precisely into the catapult for launch, or out of the landing area following recovery.

These tests set the stage for carrier testing at sea, including the highly anticipated first catapult launch and retrieve of the autonomous unmanned air system in mid-2013. The software that will make this all possible was tested earlier this year in a manned F-18, which performed carrier landings completely under software control. The software/system utilizes precision GPS installed on both the X-47B and the carrier, and provides a glide slope path to guide the aircraft onto the ship.
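The precise guidance laws are not public, but the basic idea of a GPS-relative glide slope can be sketched in a few lines. Everything below, including the 3.5-degree approach angle, the function name, and the sample numbers, is illustrative rather than Northrop Grumman's implementation.

```python
import math

# Toy illustration of GPS-relative glide slope guidance. This is NOT the
# X-47B's actual (unpublished) algorithm; it only shows the core idea of
# comparing the aircraft's position against a glide path anchored to the
# moving deck. The 3.5-degree angle is an assumed, typical approach angle.

GLIDESLOPE_DEG = 3.5

def glideslope_error(aircraft_alt_m: float, deck_alt_m: float,
                     horizontal_range_m: float) -> float:
    """Vertical deviation in meters from the ideal glide path.
    Positive means the aircraft is above the path."""
    ideal_alt = deck_alt_m + horizontal_range_m * math.tan(
        math.radians(GLIDESLOPE_DEG))
    return aircraft_alt_m - ideal_alt

# 2 km behind the ship with the deck 20 m above the water, the ideal
# altitude is about 142 m; an aircraft at 150 m is roughly 8 m high.
err = glideslope_error(aircraft_alt_m=150.0, deck_alt_m=20.0,
                       horizontal_range_m=2000.0)
print(f"{err:+.1f} m relative to glide path")
```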

As I reported in my article on UAV and electronic payload trends (in our Electronic Military & Defense magazine), the X-47B is a glimpse into the future of unmanned combat aircraft. Its autonomous flight capability, including autonomous refueling, will greatly increase the reach of the carrier-based force, providing a greater standoff distance between the target and the aircraft carrier from which the X-47B is launched. The X-47B will be able to deliver up to 4,500 lbs. of smart munitions and return to the carrier on its own, without endangering the life of a pilot.

Since these two aircraft are demonstration units designed to prove the concept of carrier-based autonomous flight, the RF and microwave sensors are not the primary focus — at least not yet. I would expect a production version of the X-47B to require advanced synthetic aperture radar with ground moving target indicator, electronic support measures, various communications and data links, conformal electro-optic day/night cameras, and SIGINT equipment, which should provide plenty of opportunities for those involved in electronic payload system design.

 

 

Microsoft may not want smashing Surface RT tablet sales

Analysts say Microsoft is likely looking to seed market for key Surface partners like Lenovo and Samsung

Matt Hamblen

December 10, 2012 (Computerworld)

 

The reported slow early sales of Microsoft’s Surface RT tablet have raised a question among IT analysts — does Microsoft truly want to produce boffo sales of the new device?

Some analysts say Microsoft can’t afford smashing sales of its Windows RT- or Windows 8-based Surface tablets, lest it outsell (and offend) key partners Lenovo, Samsung and others that sell branded Windows-based tablets. Those partners pay Microsoft a licensing fee to run Windows on their tablets.

“I’ve believed all along that the [Microsoft] goal is not to be the leading tablet hardware vendor, but rather to [use Surface to] seed the market with Windows 8 tablets,” said Jack Gold, an analyst at J. Gold Associates.

“Microsoft wants to have enough devices sold to get people interested in Windows 8, then basically turn over the market to its Original Equipment Manufacturers (OEMs),” Gold added. “This is the same strategy that Google uses with Android phones and tablets.”

The real money for Microsoft and Google comes not from hardware, such as Surface or Nexus 7 tablets, but from sales of apps and services through their respective app stores, Gold and other analysts say.

“Modest sales for Surface could still be to Microsoft’s advantage by showing vendors that Windows 8 tablets have legs in the market and in [creating] an installed base from which to build,” Gold said.

While IDC analyst Ryan Reith said Microsoft could afford for Surface to outsell other Windows 8 tablets, he also mostly agreed with Gold.

“Given Surface RT’s limited distribution, it makes sense that it probably won’t outsell others,” Reith said.

“Microsoft would have pushed these tablets to all channels if it was going for large volume, but given that it didn’t tells me that it is more about setting the bar for what it wants its OEMs to develop and about platform exposure,” he added. “Basically, the Surface acts as a handbook for ‘This is how we’d like you to build tablets on Win 8.’”

Reports surfacing last week said that Microsoft is ready to supply Surface tablets to retailers beyond its own 31 stores and 34 holiday specialty stores.

A Microsoft announcement of that kind could come as early as Monday or Tuesday, according to several industry sources.

The rationale for adding more stores would be sluggish Surface sales.

Boston-based brokerage firm Detwiler Fenton last week estimated that Microsoft would sell 500,000 to 600,000 Surface RT tablets in all of 2012, while IHS iSuppli projects sales of 1.3 million Surface units by year’s end.

Surface RT, a 10.6-in. tablet, went on sale starting at $499 on Oct. 26.

The 10.6-in. Surface Pro, formally called Surface with Windows 8 Pro, will go on sale in January, starting at $899.

Analysts say sales of the Google Nexus 7, which launched in June for $199, are now running at about 1 million units a month, up from 500,000 early on. Apple, meanwhile, has typically sold several million units of any of its iPad tablets and iPhones in the first few days of sales.

Microsoft today didn’t respond to a request for comment on its sales or retail strategy.

 

 

FCC urges FAA to let passengers run gadgets during takeoff

 

The FCC weighed in on an FAA request for comments about its policy requiring passengers to stow mobile devices during takeoff and landing

PC World

By Jared Newman

December 7, 2012 06:27 PM ET

 

Airplane passengers aren’t the only ones fed up with restrictions on the use of portable electronic devices during takeoff and landing.

Julius Genachowski, chairman of the Federal Communications Commission, has written a letter urging the Federal Aviation Administration to change its rules. The FAA is reviewing its long-held policy against the use of electronics during takeoff and landing, and Genachowski said he supports that process.

“This review comes at a time of tremendous innovation, as mobile devices are increasingly interwoven with our daily lives,” Genachowski wrote, according to The New York Times. “They empower people to stay informed and connect with friends and family, and they enable both large and small businesses to be more productive and efficient, helping drive economic growth, and boost U.S. competitiveness.”

The FAA previously studied the potential for electromagnetic interference caused by portable electronics in 2006. Although the study didn’t find any evidence of grave danger during takeoff and landing, the agency erred on the side of caution, saying it also couldn’t find enough evidence to change its longstanding policy. (It’s worth noting, though, that American Airlines pilots are allowed to use iPads instead of printed flight manuals.)

Under the rules, airlines can allow specific electronic devices to be used at all times, but only if the airline can prove there’s no danger. To do so, airlines must send each device into the air, with no passengers on board. It’s an expensive process even for one device, let alone the hundreds of tablets, laptops, and e-readers that hit the market every year.

In August, the FAA announced that it’s reviewing its policies for all portable electronics except cell phones. The plan is to form a working group with government and industry parties, and eventually set new rules on the use of approved electronics during all phases of flight.

That sounds like great news for travelers, but this is the government after all, so don’t expect a quick change in policy. The FAA hasn’t provided any updates on its plans in the last three months, even though it was supposed to formally establish a working group this fall. Once the group is formed, it’ll still take six months to go over the rules, and probably even longer to implement any changes.

But as pressure to change the rules increases–even from within the U.S. government–the FAA won’t be able to drag its feet forever. Let’s hope this is the beginning of the end of stowing away our gadgets during takeoff and landing.

 

 

 

IEEE Institute

 

The Next Generation of Surgical Robots

Smaller, lighter, and less expensive

By ANIA MONACO, 7 December 2012

http://theinstitute.ieee.org/technology-focus/technology-topic/the-next-generation

 

Surgery can be anything but stress free. Beyond the anxiety over the procedure’s outcome, patients must often deal with complications, long recovery periods, and large, painful scars. Minimally invasive surgeries, which have become increasingly common, alleviate many problems. With their smaller incisions, they tend to result in faster healing and fewer post-op woes.

In a typical laparoscopic procedure, for example, a surgeon makes small cuts in the patient’s skin through which small operating tools and a camera are inserted. The surgeon then views the operating site, usually on a monitor, while controlling the tools. But minimally invasive surgeries, used for such procedures as removing an ovarian cyst or a prostate gland, have their challenges. The rigid instruments used are typically more than 30 centimeters long, which can amplify a surgeon’s normal hand tremor.

In addition, surgeons must deal with what’s known as the fulcrum effect: When the hand moves to the right, the tip of the surgical instrument moves to the left. And because such surgery requires extensive operating skills and dexterity, surgeons face a difficult learning curve.
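A simple lever model makes the fulcrum effect concrete: the incision acts as a pivot, so the tip moves opposite to the hand, scaled by the ratio of instrument length inside the body to length outside. The sketch below is illustrative only, and the lengths are hypothetical.

```python
# Simple lever model of the fulcrum effect (illustrative; the lengths
# are hypothetical). The incision acts as a pivot, so tip motion is
# inverted relative to hand motion and scaled by the inside/outside
# length ratio of the instrument.

def tip_displacement(hand_displacement_mm: float,
                     length_outside_mm: float,
                     length_inside_mm: float) -> float:
    """Lateral tip motion for a given hand motion; the sign flip is the
    left/right inversion surgeons must learn to compensate for."""
    return -hand_displacement_mm * (length_inside_mm / length_outside_mm)

# A 350 mm instrument with 250 mm inside the patient: a 1 mm hand
# tremor to the right becomes a 2.5 mm tip excursion to the left.
print(tip_displacement(1.0, length_outside_mm=100.0, length_inside_mm=250.0))
```

The deeper the instrument sits, the larger that ratio grows, which is why long rigid tools both invert and amplify hand motion.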

Enter robots, which have made their way into operating rooms to alleviate some of those drawbacks. Among the most popular is the da Vinci surgical system, introduced in 1999 by Intuitive Surgical of Sunnyvale, Calif. Unlike a traditional surgeon, a doctor using the robot does not handle most of the surgical instruments directly. Instead, after making the small incisions, the surgeon inserts instruments attached to three or four robotic arms, one of which holds a stereoscopic camera.

The surgeon then sits at a control console near the operating table, looks through a viewfinder to examine 3-D images from inside the patient, and uses joystick-like controls located beneath the screen to manipulate the surgical tools. The da Vinci, with more than 2400 systems installed at nearly 2000 hospitals worldwide, is now used in about 80 percent of prostatectomies in the United States.

But such surgical robots also have their problems, not the least of which is their expense. At more than US $1 million, a da Vinci system is a steep investment for a hospital. Plus, the robot’s size and weight—about 180 centimeters tall and more than 900 kilograms—can be an issue. And if a complication arises and the operation must be converted to an open surgery, it’s difficult to move the robot out of the way quickly so the surgeon can step in.

IEEE Fellow Guang-Zhong Yang and other engineers are hard at work developing a new generation of robots that can give surgeons a wider variety of options. Yang is director and cofounder of the Hamlyn Centre for Robotic Surgery and deputy chairman of the Institute of Global Health Innovation, both at Imperial College London. He was also a speaker at the IEEE Life Sciences Grand Challenges Conference, held in October in Washington, D.C.

“My vision is that future surgical robots shouldn’t be large and expensive machines that are accessible only to the privileged few,” Yang says. “Robots should be a lot smaller, more affordable, and integrated more seamlessly with normal surgical work flow.”

 

Yang and his colleagues, with a bit of reptilian inspiration, have built such a robot.

 

SURGICAL SNAKE

Many surgeries, including those performed on the heart, throat, and stomach, involve getting to tissue deep within the body. That can be a challenge for a minimally invasive approach because the instruments are long and rigid. But Yang and his team—which includes computer scientists, physicists, and surgeons—developed a snakelike robot that can help surgeons do the job.

The i-Snake (which stands for imaging-sensing-navigated, kinematically enhanced) robot has fully articulated joints, allowing the tool to move around obstacles just as a snake can. The joints are powered by micromotors, and the tip is fitted with multiple sensing and imaging mechanisms.
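The articulated-chain idea can be sketched with elementary planar kinematics: each joint adds a relative bend, and the tip position is the running sum of the link vectors. The toy model below uses assumed link lengths and is not the i-Snake's actual joint design.

```python
import math

# Toy planar forward-kinematics model of a snake-like arm (assumed
# link lengths; not the i-Snake's actual joint design). Each joint
# bends relative to the previous link, so the tip position is just
# the running sum of the link vectors.

def snake_tip_position(joint_bends_deg, link_length_mm=40.0):
    """Return the (x, y) tip position for a chain of equal-length links."""
    x = y = heading = 0.0
    for bend in joint_bends_deg:
        heading += math.radians(bend)  # each bend is relative to the last link
        x += link_length_mm * math.cos(heading)
        y += link_length_mm * math.sin(heading)
    return x, y

# Five links bending 20 degrees each curl the arm through 100 degrees,
# the kind of path a rigid instrument cannot follow around an obstacle.
print(snake_tip_position([20, 20, 20, 20, 20]))
```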

The i-Snake’s flexibility yields perhaps its biggest benefit: Surgeons can guide the tool into regions of the body that are hard to get at, with minimal cutting. “If you can navigate between natural anatomical planes, you don’t have to cut through muscles or cause inadvertent damage to structures such as the nerves—which makes recovery much better,” Yang says.

The robot requires just one incision, as opposed to the several used in today’s laparoscopies for inserting an endoscope and surgical tools. Using a joystick, the surgeon can digitally control the robot’s shape and movement inside the body.

“The i-Snake is not meant to compete with or replace the da Vinci robot, per se,” Yang says. “It is based on a different principle. We wanted to develop something that is more hands-on, as a smart instrument, rather than large machinery—similar to how computers are now (such as mobile devices) compared to what they were like more than 20 years ago.”

The i-Snake is about 12.5 millimeters in diameter and can have a variable length, typically about 40 centimeters. It can be held by the surgeon or have its end docked onto a robotic arm fixed to the operating table. The robot has a hollow center through which surgeons insert different surgical tools.

“The i-Snake can increase surgeons’ perception and the consistency of their motor manipulation skills, ultimately improving the outcome of the surgical procedure,” Yang says.

Last year, a paper on the i-Snake won the Best Medical Robotics Paper Award at the IEEE International Conference on Robotics and Automation.

So far, the team has tested the i-Snake on animal subjects. Yang says he hopes to see the device in hospitals within five years. “I think we are close to that,” he says, adding that the robot could be used to perform gastrointestinal, gynecological, and cardiothoracic surgeries.

 

CHEAPER OPTIONS

The key to making surgical robots less expensive lies in their size, according to Yang. “When I design a robot,” he says, “my thinking is that if you cannot carry it in a case, then don’t bother making it. You want it to be compact and small.”

Another way to keep the price down is to make robots geared to specific tasks. “You don’t want to try to make a robot that can do everything,” Yang says.

Yang envisions that surgeons will one day have a fleet of small robots at their disposal, each to help with different tasks. “In the future,” he says, “you may have four or five robots in the operating room. One may be for dissecting delicate tissue, another for precision-controlled tissue ablation, and yet another for microscopic anastomosis.

 

“Robots are ultimately just very smart instruments. They can be used to enhance a surgeon’s vision, dexterity, or precision—all with less pain and trauma for the patient.”

 

NIST Revising Glossary of Infosec Terms

Defined Terms Found in NIST, Defense Dept. Publications

By Eric Chabrow, December 11, 2012.

http://www.govinfosecurity.com/nist-revising-glossary-infosec-terms-a-5347

Looking for a holiday gift for your boss who doesn’t quite understand information security lingo? The National Institute of Standards and Technology has one you can give, and it’s free.

NIST has issued a draft of Interagency Report 7298 Revision 2: NIST Glossary of Key Information Security Terms.

The glossary includes most of the terms found in NIST publications. It also contains nearly all of the terms and definitions from CNSSI-4009, an information assurance glossary issued by the Defense Department’s Committee on National Security Systems, a forum that helps set the federal government’s information assurance policy.

The publication contains 215 pages of definitions, from “Access” – the ability to make use of any information system resource – to “Zone of Control” – a three-dimensional space surrounding equipment that processes classified and/or sensitive information within which TEMPEST exploitation is not considered practical or where legal authority to identify and remove a potential TEMPEST exploitation exists. (TEMPEST is defined as a name referring to the investigation, study and control of compromising emanations from telecommunications and automated information systems equipment.)

“As we are continuously refreshing our publication suite, terms included in the glossary come from our more recent publications,” publication editor Richard Kissel writes. “The NIST publications referenced are the most recent versions of those publications. It is our intention to keep the glossary current by providing updates online. New definitions will be added to the glossary as required, and updated versions will be posted on the Computer Security Resource Center website.”

NIST is seeking comments and suggestions on the revised glossary, which should be sent by Jan. 15 to secglossary@nist.gov.

 

 

Iran says it can make copy of captured CIA drone

The Associated Press

Posted on Wed, Dec. 12, 2012 03:31 AM

TEHRAN, Iran — A senior Iranian lawmaker says Tehran can now manufacture a copy of an advanced CIA spy drone captured last year.

Avaz Heidarpour, who sits on parliament’s national security committee, says experts have reverse-engineered the RQ-170 Sentinel craft and Iran now is capable of launching a production line for the unmanned aircraft.

His remarks were posted on the parliament’s website, icana.ir, on Wednesday.

Iranian officials frequently announce technological and military breakthroughs, most of which are impossible to confirm independently.

Heidarpour’s comment comes two days after Iran’s Revolutionary Guard said it decoded all data from the drone that went down in December 2011 near Iran’s eastern border with Afghanistan.

Last week, the Guard claimed it captured another U.S. drone after it entered Iranian airspace over the Persian Gulf.

Read more here: http://www.kansascity.com/2012/12/12/3961569/iran-says-it-can-make-copy-of.html#storylink=cpy

 

Is Windows 8’s Lack of Windows a Mistake?

Usability guru Jakob Nielsen says Microsoft’s new OS takes a giant step backward

IEEE Spectrum

BY Steven Cherry // Fri, December 07, 2012

 

Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”

Back in January, we had an article in Spectrum about whether Windows 8 could succeed in what we called Microsoft’s greatest challenge ever: writing a single operating system that would work not only on the desktops and notebooks of yesteryear but also the tablets and smartphones of tomorrow.

Microsoft first showed Windows 8 at a conference about a year ago. We quoted one attendee who was “blown away” by what he saw. He said, “They have what looks like two different operating systems side by side. And the part that took everybody by surprise was that you’re switching back and forth between them casually.”

It was a stunning technical achievement, but the question remained: Will it do the job for the millions of users who need to log into the billion or so Windows devices out there in the world?

Recently, software guru Jakob Nielsen gave Windows 8 a thorough vetting, with usability testing on both desktops and tablets. His verdict? Journalist Preston Gralla of Computerworld summed it up this way: “Windows 8 is bad on tablets and even worse on PCs. [Nielsen] blames dueling interfaces, reduced ‘discoverability,’ ‘low information density,’ and more.”

That sounds terrible. And if true, it will be terrible for the millions of people using millions of computers and mobile devices, 82 percent of which still run one version or another of the Windows operating system. It will also be terrible for Microsoft, if its bet-the-farm wager on Windows comes a cropper.

So I invited Jakob Nielsen to describe Windows 8, since most of us haven’t even spent any time with it yet, and to tell us just what’s wrong with it. He’s the cofounder, with another legendary software and interface expert, Don Norman, of the Nielsen Norman Group, in Fremont, Calif.; he’s the author of Useit.com, the website on which he published his Windows 8 usability report; and he’s my guest today by phone.

Jakob, welcome to the podcast.

Jakob Nielsen: Thank you, Steven.

Steven Cherry: Here’s your complete summary: “Hidden features, reduced discoverability, cognitive overhead from dual environments, and reduced power from a single-window UI and low information density.” And then you wrote, “Too bad.” Let’s take them in reverse order. “Low information density”—what’s that?

Jakob Nielsen: Well, that just means that you get relatively little information on the available screen space, and certainly for different sizes of screens they can give you different amounts of information. If you have a phone, that will inevitably have little information, but if you have a bigger screen and a tablet you want more, and if you have a really big screen like on a desktop you want even more. And so, having a design that works well for a phone will not work perfectly in a tablet, and it will work terribly on a PC. And the problem is when you just look at screen shots in, let’s say, advertising or marketing literature, the Windows 8 designs look rather pretty and colorful and nice—and you might even say clean, as opposed to cluttered, as a lot of older designs do. But the problem is that once you start using them, you discover that you get very little information on any given screen. Particularly on the big screens, there’s a lot of wasted space; everything is really big and bright but doesn’t actually tell you very much. And in the long run, which means the sustained daily basis, that is just not acceptable. That is not why people use computers.

Steven Cherry: So, the next one up was “reduced power from a single window.” Is that just more of the same problem?

Jakob Nielsen: It’s a related problem. It also stems from their big mistake, which is to try to have to do a single design for everything because a single window works perfectly on a phone—I mean, you have to have just one window when you just have that small a screen. On a tablet, I would say most of the time a single window is good on a tablet as well; you want that kind of full-screen environment and focusing and doing one thing at a time. Now, we scale up to the desktop computer, and that falls apart completely because you want to do multiple things. I mean, the reason the entire system is called Windows with an s, with a plural, is that it comes from the realization that the older approach of using computers with a full-screen design didn’t really work for the modern office environment: for the knowledge worker, for the power user. You know, we did user testing on Windows 8, and people had a very hard time doing tasks that involved doing more than one thing. Let’s say one thing we tested was a task that said, “You want to make a list of three possible things to go out to see”—so, like movies or concerts or whatever—and send that list to a friend. So, “I’m going to propose you a list of three different things to do”—that’s kind of the scenario. And that was very clunky. I mean, they could do it, but it was just too much work. That’s the type of things that the Windows computers should be able to do easily, but what I say is they shouldn’t call it Windows any more; they should call it Microsoft Window, in the singular, because it’s just one window. It’s not enough.

Steven Cherry: It’s kind of ironic. The very first version of Windows didn’t really do windows; it could tile applications, but it couldn’t actually overlap them, and we’re kind of going back to that. And I guess there’s another irony, that it’s never been cheaper or easier to have a large screen or multiple screens attached to your computer, and that’s what most people do these days.

Jakob Nielsen: Completely. And that’s been the trend for the last 20 years or so, has been bigger and bigger monitors, because being able to see a lot of information at once is vastly superior to seeing some information at one point in time and then later some other information. Because that notion of switching environments or switching views presents a large burden on your short-term memory, to remember what you saw even just 5 seconds ago. It’s already weaker than just switching your eyeball and just looking at it. I should point out, by the way, that some people have kind of criticized my analysis, but they do actually allow you to have several windows if you go into the legacy mode. And on the one hand, that’s true, they do have a legacy mode. On the other hand, that almost kind of proves that the new design doesn’t work, that they feel the need to maintain a legacy mode. And also, that introduces its own set of usability problems because now you have two different user interfaces on the same computer, and you have to remember what you can do, where you have to switch between them. Again, just switching environments is in its own right cognitive overhead.

Steven Cherry: Yeah. That was the very next one on your list, “cognitive overhead from dual environments.” The one after that or before is “reduced discoverability.” I guess discovery is a good thing, so reducing it is bad—but what is it?

Jakob Nielsen: Well, discoverability means whether you can find out or discover what features or support you have available from the system at any given time, as opposed to having to remember it or just know it. And people are just not very good at remembering things, whereas they’re much better at noticing things and being reminded, “Oh yeah, I could do this.” And so, if things are visible, they’re much more likely to be used, and this was the last revolution in user interfaces. The big revolution in user interfaces was the graphical user interface that made that change to a much more discoverable user interface, because things were now represented on the screen by icons, by menus and so forth, as opposed to the older style of DOS and the Unix line-mode interface and so forth—command-line interfaces, where you had to just know what the commands are as opposed to being told what the commands are. And the history has shown that graphical user interfaces were, in fact, successfully used by vastly many more people than the people that were able to use the command-line interface. Now they’ve taken a lot of this away by hiding the icons, by hiding the menus, by making it that you had to remember, you had to, like, put your mouse in the upper right-hand corner to reveal things. Now sometimes you have to do this because of reasons of lack of space.

So again, if you think about designing for a phone, a very small screen, you cannot show all the icons, all the menus, at all times because then there would be no room left for the content.

Steven Cherry: Yeah. You mentioned the icons, and Microsoft has kind of moved away from icons in this environment, although as you say they are available in the legacy environment. The new environment uses tiles, and a friend of mine, another journalist colleague, Wayne Rash of eWeek, he reviewed the Nokia Lumia 810 phone recently, and it uses Windows Phone 8. He thought that the tiling was a great feature. Let me just read you what he wrote:

Once you get used to the tiles, they are as intuitive as the icons on Android and iOS devices, and more useful. There’s less wasted space on the screen and in many cases the tiles include live content. For example, when I downloaded the WeatherBug app, the tile on the start page gives me the current conditions for my location. With the iOS version of the app, I get a WeatherBug icon. Seeing the current conditions at a glance is more useful.

Is it possible Microsoft created a great smartphone operating system that’s a terrible computer and tablet OS?

Jakob Nielsen: I think that’s probably the real story behind it—that they knew they were in trouble on the phones, where Apple and Android have really been dominating. And so they probably emphasized that, to the detriment of their vast traditional customer base of all the business users and even also the home users as well. Going back to the tiles, I think that’s a good example of it’s not 100 percent clear-cut that user interface design is something that’s good or bad, because I do think the tiles have some benefits. Because first of all the tiles are rather big, and that means that they are easy to touch, so on a phone or a tablet that’s a great advantage; you don’t want these tiny little things that are very error prone to touch. So that’s good: the tiles are easy to touch. That’s a great benefit. Secondly, as pointed out in the quote you just read, they have what’s called “live tiles,” that the information inside the tile will update, whereas an icon tends to look the same no matter what. And the example of the weather forecast is the example of good use of a live tile; unfortunately, [there are] a lot of bad uses as well. This is not something that’s inherent in the system; this is something that’s due to, maybe, sort of exuberant or overly excited designers who say, “Wow, we have this new feature—we have to use it!” No, you don’t have to use every possible feature at your disposal.

Steven Cherry: Ironically, Apple seems to be slowly converging iOS, its phone and tablet operating system, and OS X, its computer operating system. Do you think that it can pull that off, or do you think it would fall prey to some of the same problems you found in Windows 8?

Jakob Nielsen: Well, if it’s 100 percent conversion, then I think it would be a mistake because they are different platforms hardware-wise and therefore also user interface–wise. A desktop computer, a tablet, and a phone—they are three different things. Phones and tablets are relatively similar—both portable, both sort of small, and both touch screen–driven—but there’s quite a large difference from those two up to the desktop. And so if you have identically the same user interface…which actually even Microsoft doesn’t quite do that; there’s a few differences in the gestures to be used between the touch screen design and the mouse-driven design. But they’re essentially the same. So if you try to do things that are identical for two very different platforms, you will not optimize for either one. And I think Microsoft tried to almost optimize for the mobile scenario, and that’s why their desktop design falls through so bad. In the case of Apple, who knows what they would do. They might try and do a little bit of a compromise, which would also be bad for both platforms.

I think what one should do is to rather recognize there’s a lot of differences between the platforms, and therefore there should be a lot of differences in the user interfaces as well. On the other hand, there can also be many similarities. As an example, in the visual language, if you’re going to have an icon for, say, search, you might as well use the same icon everywhere so it’s easy to recognize. Or to take an example, one of the good things Microsoft did [was] they introduced something they call “charms,” which are generic commands, which are ubiquitous, always present features that work on everything, and search is one of those. So there’s always search, and it’s always available on the right hand of the screen—if you remember it, because it’s hidden. So that’s good. It’s good that it’s always available, and they might as well always use the same icon, always have search run in about the same way.

Steven Cherry: Very good. Well, it takes experts to write these systems for Microsoft and Apple and Android, and it takes experts to evaluate them, so on behalf of all users, thanks for testing Windows 8 and thanks for telling us about it.

Jakob Nielsen: You’re welcome. It’s good to get a chance to tell you what happened when we got some real users to try it out for real.

Steven Cherry: We’ve been speaking with Jakob Nielsen about how real users are finding Windows 8 on many different devices.

For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

Announcer: “Techwise Conversations” is sponsored by National Instruments.

This interview was recorded 27 November 2012.

 

Cybercrime Economy

Massive bank cyberattack planned

By David Goldman, @CNNMoneyTech

December 13, 2012: 12:03 AM ET

NEW YORK (CNNMoney)

 

Security firm McAfee on Thursday released a report warning that a massive cyberattack on 30 U.S. banks has been planned, with the goal of stealing millions of dollars from consumers’ bank accounts.

McAfee’s research corroborated an October report from RSA, the security wing of IT giant EMC Corp (EMC, Fortune 500).

RSA startled the security world with its announcement that a gang of cybercriminals had developed a sophisticated Trojan aimed at funneling money out of bank accounts from Chase (JPM, Fortune 500), Citibank (C, Fortune 500), Wells Fargo (WFC, Fortune 500), eBay (EBAY, Fortune 500) subsidiary PayPal and dozens of other large banks. Known as “Project Blitzkrieg,” the plan has been successfully tested on at least 300 guinea pig bank accounts in the United States, and the crime ring had plans to launch its attack in full force in the spring of 2013, according to McAfee, a unit of Intel (INTC, Fortune 500).

Project Blitzkrieg began with a massive cybercriminal recruiting campaign, promising each recruit a share of the stolen funds in exchange for their hacking ability and busywork. With the backing of two Russian cybercriminals, including a prominent cyber mafia leader nicknamed “NSD,” the recruits were tasked with infecting U.S. computers with a particular strain of malware, cloning the computers, entering stolen usernames and passwords, and transferring funds out of those users’ accounts.

The scheme was fairly innovative. U.S. banks’ alarm bells get tripped when customers try to access their accounts from unrecognized computers (particularly overseas), so banks typically require users to answer security questions. Cloning computers lets the cybercriminals appear to the banks as though they are the customers themselves, accessing their accounts from their home PCs — thereby avoiding the security questions.

And since most banks place transfer limits on accounts, recruiting hundreds of criminals to draw smallish amounts out of thousands of accounts is a way to duck those limits. The thieves could collectively siphon off millions of stolen dollars.
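The arithmetic behind that strategy is simple multiplication, as the sketch below shows. Every figure in it is invented for illustration; neither McAfee nor RSA has published such numbers.

```python
# Hypothetical arithmetic of sub-limit theft. Every figure below is
# invented for illustration; the McAfee and RSA reports publish no
# such numbers.

transfer_limit = 3_000        # assumed per-transfer bank limit, dollars
per_account_take = 2_500      # kept below the limit to avoid alarms
compromised_accounts = 5_000  # spread across many recruits

assert per_account_take < transfer_limit  # no single transfer trips a limit
total = per_account_take * compromised_accounts
print(f"${total:,}")  # $12,500,000
```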

As terrifying as that sounds, the fact that the project is out in the open is a huge deterrent. RSA first uncovered the scheme in the fall, and independent security researcher Brian Krebs linked the report to NSD in the following days. Since then, the project appears to have gone dark.

NSD has effectively disappeared from chat forums, Krebs told CNNMoney.

“I can’t find him anywhere,” Krebs said. “Either bringing this to light scuttled any plans to go forward, or it’s still moving ahead cautiously under a much more protective cover.”

In either case, knowing what they’re up against could be a blessing for banks. McAfee said it is coordinating with law enforcement officials and working with several banks to prepare them for the potential attacks.

The financial industry is accustomed to fending off skilled cyberthieves. It gets hit every day by thousands of attacks on its infrastructure and networks, according to Bill Wansley, a senior vice president at Booz Allen Hamilton who specializes in cybersecurity issues.

Those are just the attacks that get discovered. Not a single financial industry network that Booz Allen examined has been malware-free, he noted.

“If you catch something early on, you can minimize the threat,” Wansley said. “It’s definitely worthwhile to get a heads up.”

For example, in September an Iranian group claiming to be the “Cyber Fighters of Izz ad-Din al-Qassam” announced that it would launch a major denial-of-service attack against the largest U.S. banks. Few took the threat that seriously, but Booz Allen took advantage of the heads-up to work with some of the targeted banks.

What followed was the largest distributed denial-of-service attack ever recorded, preventing the public from accessing the websites of Chase, Bank of America (BAC, Fortune 500), Wells Fargo, US Bank (USB, Fortune 500) and PNC Bank (PNC, Fortune 500) — intermittently for some, and for as much as a day for others. The banks that were better prepared were the least affected, he said. (Who actually sponsored the attacks remains a subject of debate. Security experts believe the Iranian government had a hand in them.)

The Cyber Fighters are at it again, declaring that they will be launching attacks on banks’ websites this week as part of “Operation Ababil.” The banks are preparing.

“Security is core to our mission and safeguarding our customers’ information is at the foundation of all we do,” said Wells Fargo spokeswoman Sara Hawkins. “We constantly monitor the environment, assess potential threats, and take action as warranted.”

Citi, Chase, and PayPal did not respond to requests for comment.

Still, the war against cybercriminals isn’t going so well for the financial industry. In July, threat detection software maker Lookingglass found that 18 of the world’s 24 largest banks were infected with popular strains of malware that the industry believed had been eradicated, suggesting that banks are prone to re-infections. In June, McAfee uncovered “Operation High Roller” — a cyberattack that could have stolen as much as $80 million from more than 60 banks.

Since consumers are federally protected from taking the hit when funds are stolen from their accounts, the banks eat the loss. And as the attacks grow more sophisticated, their annual price tag keeps rising.

“There are absolutely attacks going on right now that we don’t know about, some of them minor, some major,” Wansley said. “There’s a lot going on out there, and frankly, we’re only seeing the frequency and severity pick up.”


 

 

Fiscal cliff negotiators take aim at COLA adjustments, sources say

FedTimes

By STEPHEN LOSEY

December 12, 2012

Federal employee groups fear that a change in the way the government sets cost-of-living adjustments is growing increasingly likely to be part of a deal to avoid sequestration.

The new method of determining the Consumer Price Index, called the chained CPI, would lower the COLAs for federal retirees’ pensions, as well as Social Security benefits, military pensions, and other indexed portions of the government’s budget. The change would at first mean only a few hundred dollars less per year for federal retirees. But it would compound over the years and decades until, eventually, retirees would likely receive tens of thousands of dollars less than they would under the current method of setting COLAs.

The chained CPI is usually 0.25 to 0.30 percentage points lower each year, on average, than the standard CPI measurements.
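To see how a fraction of a percentage point compounds, consider the sketch below, which compares a hypothetical pension indexed at a standard CPI of 2.5 percent against a chained CPI of 2.2 percent. Only the roughly 0.3-point gap comes from the reporting above; the pension amount, the rates, and the time horizon are assumptions.

```python
# Sketch of how a ~0.3-point COLA gap compounds. The 2.5%/2.2% rates,
# $30,000 pension, and 25-year horizon are assumptions for illustration;
# the reporting above supplies only the 0.25-0.30 point average gap.

starting_pension = 30_000
standard_cpi = 0.025
chained_cpi = 0.022
years = 25

standard = chained = float(starting_pension)
cumulative_shortfall = 0.0
for _ in range(years):
    standard *= 1 + standard_cpi   # COLA under the current CPI measure
    chained *= 1 + chained_cpi     # COLA under the chained CPI
    cumulative_shortfall += standard - chained

print(f"annual gap after {years} years: ${standard - chained:,.0f}")
print(f"cumulative shortfall: ${cumulative_shortfall:,.0f}")
```

Under these assumptions the annual gap reaches roughly $4,000 by year 25, and the cumulative shortfall lands in the tens of thousands of dollars, which is the compounding effect retiree groups describe.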

However, the switch could save the government more than $290 billion over the next decade, according to a paper released Tuesday by the Moment of Truth Project, which is co-chaired by former White House Chief of Staff Erskine Bowles and former Sen. Alan Simpson. Simpson and Bowles also pushed for a transition to the chained CPI in 2010 when they headed the White House’s deficit reduction commission.

Jessica Klement, legislative representative for the National Active and Retired Federal Employees Association, told reporters during a conference call that there is more talk on Capitol Hill recently about adopting the chained CPI method of calculating inflation.

“This is getting a lot of traction in deficit reduction talks just because of the amount of money it saves,” Klement said.

But Klement and other representatives of federal employees also noted that the chained CPI is somewhat confusing and hard to explain, which may mute protests from the public if the government adopts it.

“This is an arcane way to raise taxes and cut benefits and raise a lot of revenue,” said Bruce Moyer, chair of the Federal-Postal Coalition, an organization of more than two dozen federal union, management and retiree groups. The chained CPI would also mean tax bracket thresholds would increase more slowly, which the Moment of Truth Project said would alone generate an extra $62 billion nationwide over a decade.

Federal employee groups object to the chained CPI, and note that federal employees have already contributed $103 billion to deficit reduction over the next decade. The current pay freeze has already cost feds $60 billion, the coalition said, and a reduced and delayed raise next year will cost them $28 billion more.

In addition, an increase to pension contributions for newly hired federal employees is set to take effect next year and will cost another $15 billion, the coalition said.

Feds are the only group of Americans that has so far been asked to sacrifice for deficit reduction, the group said, and the government should raise taxes on millionaires and billionaires before asking middle-class government employees to contribute more.

Rep. Chris Van Hollen, D-Md., told reporters during the call that he hasn’t heard about any proposals specifically targeting federal employees being discussed during President Obama’s fiscal cliff negotiations with House Speaker John Boehner. But Van Hollen expects Republicans to push steep increases to current federal employees’ pension contributions, which would likely amount to a 5 percent pay cut.

Pentagon Warns: ‘Pervasive’ Industrial Spying Targets U.S. Space Tech

Wired.com

Danger Room

By Robert Beckhusen

12.13.12

http://www.wired.com/dangerroom/2012/12/space-espionage/?utm_source=Contextly&utm_medium=RelatedLinks&utm_campaign=Interesting

 

In 2011, two Chinese nationals were convicted in federal court on charges of conspiring to violate the Arms Control Export Act after attempting to buy thousands of radiation-hardened microchips and sell them to China. The day the pair were sentenced to two years in prison for the plot, the U.S. Attorney for the Eastern District of Virginia, Neil MacBride, called it an example of how “the line between traditional espionage, export violations and economic espionage has become increasingly blurred.”

It’s also an example of the increasing number of military and space technology espionage cases being uncovered in the U.S. each year, according to a new report from the Defense Security Service, which acts as the Pentagon’s industrial security oversight agency. According to the report, first noted by InsideDefense.com, industrial espionage has grown “more persistent, pervasive and insidious” (.pdf), and “regions with active or maturing space programs” are some of the most persistent “collectors” of sensitive radiation-hardened, or “rad-hard,” microchips, an important component for satellites. And now that North Korea has successfully launched its first satellite, it’s worth taking a close look.

The report doesn’t single out any country for space-tech espionage, lumping the suspected origins of espionage plots together into regions such as East Asia and the Pacific. But according to the report, many espionage attempts arising in Asia reflect “coordinated national strategies” by governments that “perceive themselves as being surrounded by threats, including from each other.” Because of this, these governments desire to upgrade their armies and make themselves more self-sufficient. Front companies originating in Asia and involved in espionage have also attempted to sell technology to countries that are — wink — “hostile to U.S. interests.”

If it’s China the DSS is referring to as “hostile,” that’s a bit unusual: the U.S. normally takes pains not to characterize China, or most countries, that way, reserving the label for the likes of North Korea, Iran and Syria.

Still, these are only hints, and it’s difficult to pinpoint exactly where these cases are coming from. Twenty-three percent of espionage attempts from East Asia were “attributed to cyber actors and were non-specific in nature.” Attempts to acquire technology through front companies are also difficult to track. Governments and militaries in Asia use “complex and very opaque systems” to acquire American technology, and it can be hard to establish the identity of the government behind a shady front company with no specific connections.

These cases also often have little to do with classic espionage, like infiltrating spies into the Defense Department, or even smuggling. Instead, most rely on seemingly more mundane ways to steal military secrets, such as seeking technology directly from the suppliers and then exporting it without a license.

In other words, the “spies” just ask defense, aerospace and technology companies for what they want, and hope the companies don’t ask too many questions. The spies also frequently pose as representatives of otherwise legitimate-seeming companies that are actually fronts. They file Request for Information paperwork (or RFIs) to get details from the government about various technologies. There’s also a growing amount of “suspicious network activity” that can include malicious programs designed to infect sensitive databases.

Nowhere is this more true than for rad-hard microelectronics. These chips are built for space: they carry a greater number of transistors than ordinary microchips, redundancy that helps protect them against the onslaught of extra-atmospheric radiation while in orbit. They’re super important to satellites and NASA space missions, for one. There’s also a growing number of cases targeting other space technologies used in “processing and manufacturing” and directed-energy systems. In 2011, reports collected by the DSS on attempts to acquire sensitive rad-hard electronics increased by 17 percent, a sizable jump.
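
One way a bigger transistor budget buys radiation tolerance is redundancy. A standard hardening technique, triple modular redundancy, runs three copies of a circuit and takes a bitwise majority vote, so a stray particle that flips a bit in one copy is outvoted by the other two. A minimal sketch of the voting idea, simplified for illustration rather than modeled on any actual rad-hard part:

import random

def majority(a, b, c):
    """Bitwise two-of-three vote: a bit flipped in one copy is outvoted."""
    return (a & b) | (a & c) | (b & c)

def maybe_upset(value, p_flip=0.05, width=8):
    """Simulate a single-event upset that flips one random bit."""
    if random.random() < p_flip:
        value ^= 1 << random.randrange(width)
    return value

true_output = 0b10110101
trials = 10_000
correct = 0
for _ in range(trials):
    copies = [maybe_upset(true_output) for _ in range(3)]
    if majority(*copies) == true_output:
        correct += 1

print(f"Voted output correct in {correct / trials:.2%} of trials")

Tripling the logic roughly triples the transistors for the same function, which is part of why genuine rad-hard parts are scarce, expensive and tightly export-controlled.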

It’s worth not overstating the espionage threat as a whole, though. Cases against the technologies targeted most often (information systems, lasers and optics, aeronautics and electronics) have not increased. But the overall number of reports is up, and some of that is probably just greater reporting of cases, not necessarily more espionage. According to the DSS, the number of case reports increased by 65 percent from 2010 to 2011, and the number of reported cases that turned into “suspicious contact reports” increased by 75 percent. That may simply mean the DSS is getting better at spotting espionage. The one consistent signal is a “relentless upward trend” in the number of cases.

The other question is how the espionage attempts break down across regions. It’s probably not surprising that the Asia-Pacific region accounts for most: some 42 percent. The Defense Security Service also thinks it very likely that attempts to acquire rad-hard chips will continue to increase, as “the perceived need within this region for modern militaries combined with growing economies will very likely fuel the continued targeting of U.S. technologies,” the report notes. Combined with the Near East (the Middle East and North Africa), the number jumps to 61 percent. The rest largely come from Europe and the former Soviet bloc.

It also shows just how humdrum spycraft often is. As opposed to the fantasy image of spies, the reality is often, as with the two Chinese nationals convicted of export violations, as simple as calling up a company for information. The result is that company’s trade secrets ending up in China or, worse, North Korea, a nightmare for any business owner. For the U.S. government, it’s a serious threat to national security.

 

2012 best and worst places to work in the federal government

 

The Washington Post. Published Dec. 13, 2012.

In recent years, federal workers have seen their salaries frozen and have found themselves at the center of a partisan debate over the value of their work. But some agencies have managed to keep their employees happy. Here are the federal government’s best and worst places to work, ranked by the Partnership for Public Service. The results are based on the annual Office of Personnel Management survey, sent this year to 2 million employees. Visit bestplacestowork.org to see how your agency stacks up.

Best large agencies

1. National Aeronautics and Space Administration
2. Intelligence Community
3. Department of State
4. Department of Commerce
5. Environmental Protection Agency

Worst large agencies

1. Department of Homeland Security
2. Department of Veterans Affairs
3. Department of Agriculture (tie)
3. Department of Labor (tie)
5. Office of the Secretary of Defense, Joint Staff, Defense Agencies, and Department of Defense Field Activities

Best mid-size agencies

1. Federal Deposit Insurance Corporation
2. Government Accountability Office
3. Nuclear Regulatory Commission (tie)
3. Smithsonian Institution (tie)
5. Federal Trade Commission

Worst mid-size agencies

1. Broadcasting Board of Governors
2. National Archives and Records Administration
3. Department of Housing and Urban Development
4. Securities and Exchange Commission
5. Department of Education

Best small agencies

1. Surface Transportation Board
2. Congressional Budget Office
3. Federal Mediation and Conciliation Service
4. Peace Corps
5. National Endowment for the Humanities

Worst small agencies

1. Office of the U.S. Trade Representative
2. Federal Maritime Commission
3. Federal Election Commission
4. Federal Housing Finance Agency
5. Millennium Challenge Corporation

SOURCE: Partnership for Public Service.

 

 

Turner named to head key committee

Dayton Daily News

Posted: 4:44 p.m. Thursday, Dec. 13, 2012

By Ellen Jervell

Washington Bureau

 

WASHINGTON – Rep. Mike Turner was named Thursday to head a House panel that will give him greater oversight of Wright-Patterson Air Force Base and the long-endangered Lima Tank Plant.

Turner will serve as chairman of the House Armed Services tactical air and land subcommittee, which has jurisdiction over programs for the Army and Air Force as well as all Navy and Marine Corps aviation programs.

Referring to “looming defense cuts” in the coming years, Turner said in a statement that “this subcommittee places me in a role to continue my strong advocacy for the men and women at Wright-Patt, the Lima Tank Plant, and a number of other facilities which preserve the safety and security of our nation.”

Turner, R-Centerville, will leave his current chairmanship of the Armed Services Committee’s Strategic Forces subcommittee.

 

 

Rice abandons State bid; Hagel could lead Pentagon

Federal Times

Dec. 13, 2012 – 05:35PM |

By JOHN T. BENNETT

http://www.federaltimes.com/article/20121213/DEPARTMENTS01/312130003

 

Susan Rice has withdrawn from consideration to be secretary of state, a move that could pave the way for former GOP Sen. Chuck Hagel to become defense secretary.

The White House announced Rice’s decision in a statement on Thursday, with President Obama praising her efforts as U.N. ambassador and blasting Republicans who have been sharply critical of Rice. The announcement set off a firestorm in Washington, with speculation flying about what the decision means for Obama’s second-term Cabinet.

Obama praised Rice as an “extraordinarily capable, patriotic, and passionate public servant” who has played “an indispensable role in advancing America’s interests.”

The president also responded to recent criticism from GOP lawmakers of Rice and of comments she made after a deadly attack on a U.S. diplomatic facility in Libya, comments that were later proved inaccurate: “I deeply regret the unfair and misleading attacks on Susan Rice in recent weeks.”

Those GOP senators had vowed to block her nomination if Obama tapped her for secretary of state.

One, Sen. Lindsey Graham, R-S.C., issued a statement minutes after the White House announcement expressing his “respect” for Rice’s decision. Graham also said he is “determined to find out what happened before, during, and after the attack” at the Benghazi diplomatic facility.

Rice’s withdrawal could set up a scenario under which Senate Foreign Relations Committee Chairman John Kerry, D-Mass., becomes Obama’s pick for secretary of state. Kerry had been mentioned as a possible defense secretary pick, but if he goes to Foggy Bottom, insiders say Hagel likely would replace Leon Panetta as defense secretary.

The White House’s announcement set off immediate buzz around Washington.

Christopher Preble of the Cato Institute, in a statement released shortly after the Rice news was revealed, said speculation that Hagel would become defense secretary “should be welcomed by anyone frustrated by years of war and foreign meddling, and out-of-control spending at the Pentagon.”

Hagel, a former Nebraska senator, is now chairman of the Atlantic Council.

During his 12 years in the upper chamber, he served on the Intelligence and Foreign Relations committees, among others.

Hagel has the respect of both Republicans and Democrats, and insiders say his nomination likely would sail through the Senate with little resistance — unlike Rice.

In recent years, Hagel has been a co-chair of the President’s Intelligence Advisory Board and a member of the Defense Policy Board, which reports directly to the secretary of defense. He also sits on the boards of several major corporations, including Chevron.

 

 

10 Smartest Cities in North America

GovTech.com

December 13, 2012 By News Staff

http://www.govtech.com/10-Smartest-Cities-in-North-America.html

 

Co.Exist recently published a list of the top 10 smartest cities in North America. The rankings are based on measurements of six components: people, economy, government, environment, lifestyle and mobility. Each component is measured with a number of related data sources. For example, smart governance was evaluated based on a not-yet-released e-governance ranking conducted by Rutgers University and on the Center for Digital Government’s 2012 Digital Cities Survey.*
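
The article doesn’t spell out how the component scores are combined, but composite rankings of this kind typically normalize each component and then average or weight them. A minimal sketch, assuming equal weights and invented scores (neither of which comes from Co.Exist):

components = ["people", "economy", "government",
              "environment", "lifestyle", "mobility"]

# Invented scores, one per component, in the order above.
cities = {
    "Boston":        [9, 8, 7, 8, 8, 8],
    "San Francisco": [9, 8, 6, 9, 8, 7],
    "Seattle":       [8, 7, 9, 8, 7, 7],
}

def composite(scores):
    """Equal-weight average across the six components."""
    return sum(scores) / len(scores)

print("Components:", ", ".join(components))
for city in sorted(cities, key=lambda c: composite(cities[c]), reverse=True):
    print(f"{city:15s} {composite(cities[city]):.2f}")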

 

Here are the top 10 overall smartest North American cities based on Co.Exist’s evaluation system:

• Boston
• San Francisco
• Seattle
• Vancouver
• New York City
• Washington, D.C.
• Toronto
• Chicago
• Los Angeles
• Montreal

 

 

While these are the overall rankings, individual category rankings were somewhat varied. New York City was ranked first when it came to smart mobility, thanks to the city’s open data efforts and events like NYC Big Apps.

Seattle ranked first in smart governance, based on the city’s use of RFID tags to track waste as well as its use of Twitter to communicate about stolen vehicles. Seattle Police also announced a program in October, called Tweets-by-Beat, that allows residents to monitor crime in real time.

San Francisco ranked first in smart environment, thanks to high rankings in energy, buildings, waste and air quality measurements. It also tied with Boston for first place in smartest people, as San Francisco is a known center of innovation and home to organizations such as Code for America.

 

* Editor’s Note: The Center for Digital Government is an advisory and research organization operated by e.Republic, Government Technology’s parent company.

 

How to Shrink the Defense Budget and Come Out Winning

 

By WILLIAM MATTHEWS, The Fiscal Times December 14, 2012

http://www.thefiscaltimes.com/Articles/2012/12/14/How-to-Shrink-the-Defense-Budget-and-Come-Out-Winning.aspx?p=1

 

The U.S. military is going to shrink, that much is clear. Whether it will be harmful or helpful is less certain.

Shedding unneeded troops, surplus bases and 20th century weapons could be a good thing. Deputy Defense Secretary Ashton Carter envisions a U.S. military that is “leaner, but agile, ready, technologically advanced.”

Critics of the coming cuts fear a different outcome. They worry that the United States will field an anemic force ill-prepared to confront increasingly sophisticated adversaries with growing arsenals of precision munitions, stealth aircraft, cyber weapons and anti-satellite missiles and lasers.

Carter envisions greater reliance on highly trained special operations forces and on high technology – including “new capabilities, novel capabilities that we haven’t revealed yet,” he told a Duke University audience Nov. 29. Critics like Sen. John McCain, R-Ariz., fear “a swift decline of the United States as the world’s leading military power.”

To many, the U.S. military is still living large on a budget swollen by 11 years of war. The 2012 defense budget of $646 billion has declined from the war-spending peak of $691 billion in 2010. But the 2012 budget, which is still in effect because Congress hasn’t passed the 2013 spending bill, is more than double the pre-war $316 billion budget of 2001.

Defense now consumes almost 20 percent of the U.S. budget. Only Social Security, at $773 billion, and health care programs such as Medicare, Medicaid and children’s health insurance, at $838 billion, cost more.

With the Iraq war over and Afghanistan winding down, many wonder why defense spending can’t be cut dramatically. It has been in the past. Defense spending plunged 43 percent after the Korean War, 33 percent after Vietnam and 36 percent after the Cold War, according to the Center for Strategic and International Studies.

“How much deeper can defense cuts responsibly be?” asks defense scholar Michael O’Hanlon of the Brookings Institution.

The 2013 defense budget is likely to be a bit lower than 2012’s. The House version calls for $635 billion; the Senate approved $632 billion. And that downward trend seems likely to steepen markedly if the automatic cuts known as sequestration kick in Jan. 2.

Sequestration will cost the military $492 billion over the next 10 years – $56.5 billion in 2013. Congress and the president are struggling to prevent those cuts, which they approved last year in the Budget Control Act, hoping to give themselves an incentive to come up with a better deficit reduction plan.

Sequestration would cut far too much, says O’Hanlon. The Defense Department is already struggling to cut $60 billion through new efficiencies and by eliminating waste, he said. The department also faces $487 billion in cuts over ten years imposed by the spending caps set in the Budget Control Act. Those cuts would hold growth in the defense budget to the rate of inflation, but not reduce defense spending.

Given the national budget crisis, O’Hanlon says defense can be cut more. In a Dec. 11 address to the military advocacy group Concerned Veterans for America, O’Hanlon said additional cuts would be “a little risky, and a little painful,” but they should be made to help control the federal deficit and fix the U.S. economy.

He joins a range of other defense experts who say deeper defense cuts can, indeed, be made.

“I tried to identify specific defense savings that I believe we can responsibly make,” he said. “When I go through my list I can find ways to save maybe $100 billion over the next 10 years, maybe $150 billion. I’m doing this in a cautious way.” He stressed that deeper military spending cuts “only make sense in the context of broader national deficit reduction and fiscal reform.”

Speaking to the same group, Sen. Lindsey Graham, R-S.C., tentatively endorsed O’Hanlon’s plan.

“I’m one of the strongest defense hawks in the Congress,” Graham said. But “if we could come up with an entitlement reform deal that saves Social Security and Medicare and deals with Medicaid and sets spending limits that are sustainable, I would entertain going past $487 billion” in defense cuts over the next decade.

 

Sequestration – the $492 billion in automatic, across-the-board cuts set to begin in January – is out, Graham said. “It’s a dumb way to reduce defense spending. I’m disappointed in the Republican Party for signing up for sequestration.” While agreeing with O’Hanlon that $100 billion to $150 billion could be cut from defense, Graham stressed that American military power must remain “superior to anything on the planet at all times,” and he noted that current defense spending, which equals about 5 percent of gross domestic product, is historically low for a nation at war.

Acknowledging that additional defense cuts may be necessary – and would not be calamitous – puts the moderate O’Hanlon and the conservative Graham in the same ballpark as Lawrence Korb, a former assistant secretary of defense and current senior fellow at the liberal-leaning Center for American Progress. Korb, too, is calling for an additional $100 billion in defense cuts.

But where O’Hanlon sees acceptable short-term military risk, Korb sees merely “a smart first step” toward reining in a decade of excessive defense spending.

The $487 billion in cuts imposed last year won’t actually cut defense spending at all, Korb writes in a Dec. 6 report. They’re cuts from “projected increases in the defense budget” and will “essentially keep the defense budget steady at its current level, adjusted for inflation, over the next five years, before allowing a return to moderate growth thereafter.”

By contrast, the much bigger cut of sequestration would reduce defense spending to its 2007 level, Korb said in an interview. That year, the base defense budget was $472 billion and funding for the wars in Iraq and Afghanistan was $171 billion.

Returning to a 2007-size budget in 2013 “is doable,” Korb said. “If you can’t run the Pentagon on $500 billion, then something’s wrong.”

Whether risky or too reserved, what would cutting another $100 billion mean for the military?

Almost certainly, there will be fewer ground forces. The Army and Marine Corps grew 15 percent during the Iraq and Afghan wars, and current plans call for returning them to pre-war levels. They could go a bit lower, O’Hanlon said. The threats the United States is likely to face in the near future require naval, air and cyber forces rather than soldiers and Marines, he said.

Korb advises simply cutting ground forces to pre-war levels: shrink the Army from 547,000 active-duty troops today to 490,000, and shrink the Marine Corps from 209,000 to 189,000. That would save $16.6 billion over a decade, he says.

 

Cut the Joint Strike Fighter program. The plan now is to buy 2,500 planes, but half that number would suffice to counter the threats posed by Iran and China, O’Hanlon said. Refurbished F-16s and more drones could help make up the difference. North Korea’s launch of a long-range missile on Wednesday underscored the fluidity of the threat.

Korb would eliminate the Navy’s purchase of 237 Joint Strike Fighters and instead buy 240 “effective and affordable F/A-18E/Fs,” saving $16.2 billion over 10 years.

Reduce the inventory of nuclear weapons, O’Hanlon advises. “There are more economical ways to maintain the triad.” For example, cut the ballistic missile submarine fleet from 14 to eight and load each sub with more missiles. “You don’t save a lot of money – a billion, two, maybe three billion a year, but it’s worth looking at.”

Korb would reduce the number of deployed nuclear weapons from 1,722 to 1,100 by 2022 to save at least $28 billion over 10 years. “Our nuclear arsenal is expensive to maintain and largely useless in combating the threats facing the nation today,” Korb writes. Even the Pentagon concedes that deterrence goals can be achieved with a smaller nuclear force, he said.

O’Hanlon said he would agree to “a slightly smaller Navy” as long as ships were kept at sea longer and crews were rotated on and off every six months. That way the Navy could get more use out of fewer ships without wearing out its sailors, he said. Aircraft carriers would be exempted from that program.

 

Korb would squeeze another $40 billion from the budget over 10 years by reforming “outdated health care programs.” Health care for troops, their families and military retirees cost the Pentagon $51 billion in 2012, nearly triple the $19 billion price tag in 2001. Health care is one of the fastest-growing parts of the defense budget; the Congressional Budget Office estimates it could reach $65 billion by 2017. In 2011, then-Defense Secretary Robert Gates said, “Health care costs are eating the Defense Department alive.”

While endorsing the idea of additional cuts, Graham offered few specifics. “We can try to do more with less. We can have smaller land forces if they have more capability,” he said.

Stealthy Joint Strike Fighters “can probably do things that F-15s and F-18s can’t by a factor of five,” Graham said. But there are limits to how far the number of less capable aircraft can be cut without creating a “coverage problem” – that is, having too few planes to cover too many trouble spots, he said.

And any cuts beyond the $487 billion already scheduled should probably wait until it is clearer how the war in Syria and the tension over Iran’s nuclear program turn out, he said. “The budget has to reflect the fact that there’s no substitute for American military power in the 21st century,” Graham said.

 

Whoa: Physicists testing to see if universe is a computer simulation

By Eric Pfeiffer, Yahoo! News | The Sideshow

December 13, 2012

 


Will you take the red pill or the blue pill?

Some physicists and university researchers say it’s possible to test the theory that our entire universe exists inside a computer simulation, like in the 1999 film “The Matrix.”

In 2003, University of Oxford philosophy professor Nick Bostrom published a paper, “The Simulation Argument,” which argued that at least one of several propositions must be true, among them that “we are almost certainly living in a computer simulation.” Now a team of physicists, in a paper posted to Cornell University’s arXiv, says it has come up with a viable method for testing whether we’re all just a series of numbers in some ancient civilization’s computer game.

Researchers at the University of Washington agree with the testing method, saying it can be done. A similar proposal was put forth by German physicists in November.

So how, precisely, can we test whether we exist? Put simply, researchers are building their own simulations, using a technique called lattice quantum chromodynamics. And while those simulations can currently recreate regions only slightly larger than the nucleus of an atom, University of Washington physics professor Martin Savage says the same principles can be applied on a larger scale.

“This is the first testable signature of such an idea,” Savage said. “If you make the simulations big enough, something like our universe should emerge.”

The testing method itself is far more complex. Consider the explanation from the paper: “Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences.”

To translate, if energy signatures in our simulations match those in the universe at large, there’s a good chance we, too, exist within a simulation.
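
The intuition behind those energy signatures: putting physics on a discrete grid distorts the relationship between a particle’s energy and its momentum as the momentum approaches the lattice cutoff. Here is a toy illustration using the simplest free-particle lattice dispersion relation; the lattice spacing is arbitrary, and this is far cruder than the Wilson fermion discretization the paper actually analyzes:

import math

a = 1.0  # lattice spacing, arbitrary units (an assumption of this toy model)

def continuum_energy(p):
    """Massless continuum dispersion: E = p."""
    return p

def lattice_energy(p):
    """Naive lattice dispersion: E = sin(p*a)/a, which matches the
    continuum result only when p*a << 1."""
    return math.sin(p * a) / a

for p in (0.1, 0.5, 1.0, 1.5):
    deviation = 1 - lattice_energy(p) / continuum_energy(p)
    print(f"p = {p:4.1f}: continuum {continuum_energy(p):.3f}, "
          f"lattice {lattice_energy(p):.3f}, off by {deviation:.1%}")

In the paper’s scenario, the most energetic cosmic rays would probe momenta near such a cutoff, so deviations of this kind, including a direction dependence aligned with the lattice axes, are the sort of signature the researchers propose looking for.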

Interestingly, one of Savage’s students takes the hypothesis further: If we stumble upon the nature of our existence, would we then look for ways to communicate with the civilization who created us?

University of Washington student Zohreh Davoudi says whoever made our simulated universe might have made others, and maybe we should “simply” attempt to communicate with those. “The question is, ‘Can you communicate with those other universes if they are running on the same platform?'” she asked.


 
