
July 18, 2015

Newswire

Blog URL https://newswirefeed.wordpress.com/

 

 

Drone Wars: Airspace and Legal Rights in the Age of Drones

http://www.suasnews.com/2015/07/36938/drone-wars-airspace-and-legal-rights-in-the-age-of-drones/

by Press • 2 July 2015

By Gary Wickert

 

It is only the tip of the iceberg. As technology advances, ordinary citizens increasingly have the ability to buy and fly reasonably priced unmanned aerial vehicles (UAVs), known as drones. News broadcasts are only now beginning to reflect the growing problems we can anticipate as their use becomes more common, both privately and commercially. Armed with high-definition cameras, these civilian UAVs have ranges of up to several miles and can hover over a neighbor’s pool party and capture footage of activity engaged in with an expectation of privacy. The typical quadcopter has a flight time of about 15 minutes, although smaller ones with tiny first-person view (FPV) cameras might manage as much as 30 minutes. The proliferation of these devices brings with it a whole host of legal issues that will assuredly give rise to civil disputes and litigation. Understanding the laws affecting their use has become a prerequisite rather than fodder for an interesting lunch conversation.

Congress has charged the Federal Aviation Administration (FAA) with the responsibility for setting rules and regulations regarding the operation of drones. On February 15, 2015, the FAA released its Notice of Proposed Rulemaking (NPRM) for small unmanned aircraft. This is the first step in changing regulations and rules currently in place. Right now, however, federal, state, and local laws are in flux, and do not adequately govern the rights of citizens either operating drones or being victimized by them. In Pittsburgh, a drone flew over a professional baseball game. In Los Angeles, hockey fans entering a sports arena successfully knocked a harassing drone out of the sky. A Seattle woman observed a drone outside her high-rise window videotaping her as she dressed. In Nashville, a drone interfered with a July 4th fireworks display. In all of these instances, it is unclear whether the drone operators could be held criminally or civilly liable simply for flying their drones over private property. There is a great deal of controversy regarding even the FAA’s authority to govern their use. It can prevent drone use near airports, but whether it has the same authority elsewhere remains in dispute.

The right of a landowner to control the low-altitude space immediately over his private property appears to be in conflict with the right of a drone owner to operate a drone in the same airspace. Prior to the Wright Brothers, 19th-century law followed the Latin maxim, Cujus est solum, ejus est usque ad coelum (“To whomever the soil belongs, he also owns the sky”). After the dawn of aviation, however, Congress passed the Air Commerce Act of 1926 and the Civil Aeronautics Act of 1938, authorizing flight within “navigable airspace” – airspace later defined to be over 500 feet above ground level. In 1946, the U.S. Supreme Court confirmed that a landowner has a right to prevent “intrusions of airspace” just as he does invasions on the ground, and that he owned “at least as much of the space above the ground as he can occupy or use in connection with the land.” U.S. v. Causby, 328 U.S. 256 (1946). It held that government flights which were so low and frequent as to interfere with the enjoyment of the land constituted a “taking.” It did not, however, clarify how much of the space below the 500-foot FAA ceiling belonged to the landowner. Even so, conflicts between landowners and air travelers were fairly uncommon. The proliferation of ultra-light aircraft over the past decade has exacerbated the issue somewhat, but, with the growth of drone use, it is clear that the debate has been reignited and will need to be addressed formally.

Inasmuch as today’s laws provide no definite ceiling on a landowner’s airspace, whether and where drones may and may not fly remains uncertain. Is a flight 100 feet above private property allowable? What about 10 feet? When does it become trespass? The U.S. Supreme Court has already told us that a pilot’s naked-eye surveillance of private property below is legal and does not constitute an invasion of privacy. California v. Ciraolo, 476 U.S. 207 (1986). This open airspace is simply a “public vantage point” from which the government, law enforcement, or a private citizen can observe from above. Helicopters are not required to stay above the 500-foot navigable airspace floor, so the Supreme Court tells us that their observation from as low as 400 feet is legal. Florida v. Riley, 488 U.S. 445 (1989).

Another possible approach to the regulation of drones is to apply the common law of trespass. As stated in the Restatement (Second) of Torts § 159(2):

Flight by aircraft in the air space above the land of another is a trespass if, but only if, (a) it enters into the immediate reaches of the air space next to the land, and (b) it interferes substantially with the other’s use and enjoyment of his land.

However, this approach is very subjective and burdensome, requiring courts to weigh the competing interests of the parties on a case-by-case basis. More is needed.

In addition to FAA rules, state and local governments are scrambling to pass ordinances governing the use of drones. While the FAA has exclusive jurisdiction to regulate the airspace above 500 feet, states can also regulate the airspace at lower altitudes. At least 43 states have pending legislation to regulate drone use. The FAA is primarily concerned with flight safety; states, however, are more concerned with privacy and nuisance issues. Alabama, for example, recently passed a law which prohibits the use of a drone to harass a hunter or fisherman. California passed a law last year prohibiting anyone from using a drone to photograph a person who has an “expectation of privacy.” Colorado prohibits drones from being used in aiding hunters. Montana law limits the admissibility of evidence in a civil or criminal legal proceeding if it was obtained using UAVs. Tennessee makes it a misdemeanor to use drone-captured video footage of a hunter or angler without their consent. Texas passed an omnibus bill that identifies 19 lawful uses for drones – a bill criticized by some as opening the door for police abuse. In Wisconsin, weaponizing a drone is a felony and law enforcement must obtain a warrant before using one to collect evidence.

Drones may seem like something out of a science fiction movie, but they are here to stay, together with all of the interesting legal issues and conundrums they create. With the onset of commercial drone use, over-regulating their private use may solve some problems while at the same time tying the hands of businesses, small and large, and stifling innovation. One thing remains certain. With the proliferation of drone use, U.S. courts and legislators will be faced with the same challenges regarding low-altitude airspace as they were faced with regarding high-altitude airspace when modern aviation was born.

http://www.claimsjournal.com/news/national/2015/07/02/264216.htm

 

 

Mabus and McCain Actually Agree … DoD is Broken

Robert Kozloski    

http://warontherocks.com/2015/07/mabus-and-mccain-actually-agree-dod-is-broken/?singlepage=1

July 14, 2015

 

 

Since Ray Mabus became secretary of the Navy, he and Sen. John McCain haven’t always seen eye to eye on important naval issues. They certainly have differing views on energy, personnel, and shipbuilding policy. But there is one important topic on which the two actually agree: the reality that the Department of Defense is broken.

McCain is leading the charge to end longstanding policies that create unnecessary overhead and limit the effectiveness of the Department of Defense — particularly with his call to review Goldwater-Nichols and the defense acquisition system.

Similarly, Mabus pointed out recently that the so-called “Fourth Estate” within the Department of Defense — the Office of the Secretary of Defense; the Joint Staff; and defense agencies such as Defense Finance and Accounting Services, Defense Logistics Agency, and 20 other organizations — has swelled in size over the past two decades. He was very critical of how these support organizations actually supported the Department of the Navy.

These organizations were created to provide common services such as information technology, logistics, and intelligence to the military services, but over time the military services have had to modify their practices to support the defense agencies. The Fourth Estate’s benefit to the military services and our elected leaders is questionable at best. What is without question is that given the cost to maintain such overhead, these offices certainly draw scarce taxpayer dollars away from our nation’s military needs.

Their criticisms are not an indictment of the men and women currently working in or leading these organizations, but rather of a system that most of them realize is flawed by design. Throughout recent history, Congress has seen creating more bureaucracy as the best tool to fix national security problems. The Pentagon’s current problems are a direct product of that drive.

After World War II, Congress attempted to coordinate the Army and the Navy’s efforts by creating the position of the Secretary of Defense — with a handful of people, really — in what now has become the Department of Defense. The Fourth Estate grew, especially after 1958, largely to manage resources more tightly. After a series of military misfortunes in the 1970s and 1980s, Congress tried to fix the same problem again by growing the military bureaucracy of joint organizations through the Goldwater-Nichols Department of Defense Reorganization Act of 1986.

Similarly, after September 11 Congress attempted to reform the intelligence community by adding the oversight layer of the Director of National Intelligence. The same approach was used to address domestic security concerns by creating the Department of Homeland Security, again positioning layers atop existing layers. But as such agencies mature, they stray from their original missions, assume new roles, and grow their staffs as the cancer of bureaucratic accretion takes hold.

Within the Department of Defense, such bureaucratic layering chokes the life out of innovation and the ability to prepare for future threats. For example, after decades of trying to reform how the Pentagon designs and purchases new weapon systems, the process is no better than when these reforms began. Checking homework before, during, and after it is done never improves students’ learning — it just requires ever more checking to be done.

An internal Army study recently showed it takes over 10 years to navigate the paperwork and reviews to produce exactly nothing — a decade worth of administration for every weapon system. Responding to the onerous oversight and coordinating documentation has become the critical path for acquisition programs. Imagine what cost these dubious reviews and tenuous paperwork drills add to every ship or plane the Pentagon tries to provide to our operating forces.

The Fourth Estate’s approach to operations is to consolidate functions within the services and to seek one-size-fits-all solutions to military problems. Defense agencies arose to provide information systems, financial services, and logistics to the military services, but they attempt to do business with a uniform, standardized approach, to the detriment of the four services. While defense-wide consolidation appears to be a good idea in theory, in practice the outcomes are almost always a disaster and a waste all their own. Further, the consolidation of massive military functions and systems greatly increases catastrophic risk, as the recent Office of Personnel Management data breaches clearly indicate.

To illustrate how complex and costly a Department of Defense-wide “good idea” is, one needs to look no further than the failed attempt to create a single personnel system for the department: the Defense Integrated Military Human Resources System. After 12 years of effort, and spending more on it than the price of two Littoral Combat Ships, Secretary Gates canceled the program and the only thing yielded by this effort was a bad acronym. Worst of all, it doesn’t appear the Pentagon has learned anything from these failed efforts.

The quest for uniformity and standardization across an enterprise the size of the Department of Defense is a terrible business practice and violates common sense, yet it is often attempted in the name of Pentagon efficiency. Rather, we should be taking a decentralized approach to executing Title 10 missions and focus on working together when and where it makes sense. For 60 years, the Pentagon and Congress have made efforts to create a single unified defense organization — a bad idea in 1947, 1958, and 1986, and one that remains so today.

Every service in the U.S. military has its own unique institutional culture based on its history and missions. Such organizational diversity should be viewed as a strength worth preserving. Creating organizations and processes that force the merging of disparate programs is an expensive fool’s errand. The Navy and Marine Corps have been in the same department for over 200 years and are still trying to perfect naval integration. To think four services can fully integrate to support the shared lie of “jointness” is absurd.

Finally, every U.S. president from Eisenhower to Obama has complained about the advice given to him by senior military officers. Goldwater-Nichols was intended to solve this by offering a single military voice to the president. Unfortunately, that single “joint” voice perpetuates groupthink, a “least common denominator” approach to decision-making, at the price of an expensive staff and extensive (meaning slow) staffing processes.

This approach may have been acceptable during the Cold War when dealing with a single enemy. But given the complexity and uncertainty of the future security environment, such an approach surely will not serve future presidents well.

Today’s sailors and marines are forward deployed to global hot spots 365 days a year to assure our allies and deter aggression. In the unfortunate event that deterrence fails and a major war emerges against a modern adversary, our decision-making cycle will need to be fast and right. Officers unfamiliar with naval forces cannot make the rapid decisions needed in this new environment. Wars in the future will not be fought like World War II or the Cold War, for which our current organization was designed.

No one in the Pentagon understands the people, platforms, and operational capabilities of the naval services better than the Commandant of the Marine Corps and Chief of Naval Operations. It makes absolutely no sense to have an Army paratrooper officer involved in making decisions for a Navy anti-ballistic missile system, or a submariner choosing Army tank designs. Yet that’s the system the Joint Staff and Congress have put in place today. Any legislative change that returns more control to the Service Chiefs must be fully supported.

 

To make these changes real, Congress and the Department of Defense must have a candid conversation about what’s broken in the U.S. military and put aside politics to find solutions. This is not just about saving taxpayer dollars or saving defense programs popular in congressional districts, although it will cut overhead; this is about national security. The Pentagon’s own ineffective bureaucracy may become its Achilles heel in future conflicts; adversaries will surely have a faster decision cycle than we have today.

Despite their policy differences, Secretary Mabus and Sen. McCain, both former naval officers, are unafraid to tackle difficult issues and are undeterred by bureaucratic resistance. Their critiques of our institutional weaknesses are on the mark. While the task of trimming the briar patch of bureaucracy within the Department of Defense is a daunting one, I’m sure they are up to the challenge.

 

Air Force Will Offer Bonuses To Lure Drone Pilots

Those finishing initial commitments could get $15,000 every year for either 5 or 9 years


http://www.wsj.com/articles/air-force-will-offer-bonuses-to-lure-drone-pilots-1436922312?mod=rss_Technology

By Gordon Lubold

Updated July 14, 2015 9:16 p.m. ET

 

WASHINGTON—The Air Force is taking steps to address a chronic shortage of drone pilots, sweetening the allure of flying the unmanned planes as part of a plan to alleviate the strain of meeting demand for drones and the video intelligence they provide.

Secretary of the Air Force Deborah Lee James is expected to announce a plan Wednesday to give Air Force pilots thousands of dollars in bonus pay if they sign up to fly the remotely piloted craft for five years or more. Ms. James also is directing that for the next year, some Air Force pilots graduating from flight school automatically be assigned to drone duty to bolster its ranks.

The Air Force plan in addition includes a pledge to spend more than $100 million to buy more equipment to help increase the service’s capacity to use drones to provide video surveillance.

A range of world-wide security crises, including Iraq, Syria and Afghanistan, and security situations in places like Yemen, North Korea and China, have resulted in a high demand among military commanders for the kind of intelligence, surveillance and reconnaissance—or ISR—that only drones, some of which have strike capabilities, provide.

But the Air Force, which flies Predator, Reaper and Global Hawk drones, has struggled to keep up with that demand largely due to the service’s inability to identify, train and retain enough drone pilots. The service trains about 180 such pilots a year, but loses about 230.

As a result, the pilots complain of being overworked and overstressed. On average, drone pilots fly up to 900 hours a year, compared with fighter pilots, who are in the cockpit an average of 250 hours a year, according to Air Force officials.

Ms. James will announce a program whereby drone pilots finishing their initial commitments could choose to extend for either five years or nine years. Under both plans, they would receive a retention bonus of $15,000 every year, with the option to receive half the total bonus up front.

The service this year also will automatically assign 80 graduates from the Air Force’s flight schools directly into drone duty. Up until now, all pilots finishing training chose among a number of conventional aircraft, including C-17 cargo planes or F-16 fighters.

 

“In a complex global environment, [remotely piloted aircraft] pilots will always be in demand,” Ms. James said in a statement. “We now face a situation where if we don’t direct additional resources appropriately, it creates unacceptable risk.”

The service trains about 1,000 active-duty pilots a year, with additional pilots schooled for drone duty. Under the plan, the Air Force expects to train and retain about 300 drone pilots a year and return the drone roster to a more robust state by 2017.

Recognizing the strained state of the drone ranks, Defense Secretary Ash Carter directed that the number of daily drone flights be decreased from 65 to 60 by October.

Currently, there are about 61 drone flights world-wide each day. That drop has helped the Air Force to “catch its breath” as one defense official said, and focus on preparing the drone force for the future.

Despite the struggle to recruit and retain enough drone pilots, the Air Force has been reluctant to make a major change that some believe would help alleviate the stress on the pilot force: opening up the job to enlisted personnel.

Now, pilots must be Air Force officers.

Expanding the pool would create another career path within the enlisted ranks, some note. But it also could undermine the Air Force’s efforts to change the narrative about drone operators, who can be perceived, even within the Air Force, as doing a lesser job when compared with pilots who fly conventional aircraft.

Moreover, Air Force officials maintain that opening up the drone jobs to enlisted personnel wouldn’t be easy and wouldn’t necessarily improve the service’s ability to field more pilots.

“These kids are not playing videogames out of their mothers’ basements,” Col. James Cluff, who commands Creech base in Nevada, has said.

Write to Gordon Lubold at Gordon.Lubold@wsj.com

 

Drone maker 3D Robotics sees the future, and it is apps

For Road Trip 2015, we travel to the outskirts of San Diego to check in with a company trying to democratize drone software for the world, a la smartphones and app stores.

http://www.cnet.com/news/3d-robotics-cnet-road-trip-2015/?tag=nl.e703&s_cid=e703&ttag=e703&ftag=CAD090e536

by Nick Statt

July 16, 2015 5:00 AM PDT

 

SAN DIEGO — What separates a drone from a smartphone? Well, other than the fact that your iPhone can’t fly (yet), drones don’t have an equivalent app store.

At least not for the moment, says Jordi Munoz, co-founder and president of the largest US commercial drone operation, 3D Robotics.

“The smartphone was a product that was intended for a consumer market, and now it went all the way to industrial applications and even the medical industry,” Munoz says. “The same is happening with drones.”

We’re sitting on the second floor of 3DR’s San Diego office where, from the window facing south, you can see Tijuana. The Mexican city, less than 10 miles away, is where Munoz grew up. It’s also where 3DR’s first manufacturing facility sits.

Munoz was just 20 years old when he got involved in the drone market eight years ago. A mostly self-taught programmer, he learned from the Internet and earned what he calls a “Google Ph.D.”

Drones, he believes, are a market that’s still primed for growth.

3DR started in 2009 selling Lego drone kits in pizza boxes; the first run of 40 sold out in 10 minutes. Today, 3DR sells seven models ranging in price from $550 do-it-yourself kits to $5,400 professional-grade devices. The company has more than 350 employees and is on track to rack up $40 million in sales this year.

Drones will truly take off, he says, when people figure out how best to use them beyond photography and for the simple fun of flying. What that means is that, as with smartphones, developers need to figure out how to build software that works on any drone — similar to apps like Facebook, which are available on every device.

In that respect, 3DR is aiming to make tools that will be used to build apps for any drone.

 

Inside the drone nest

3DR is now headquartered out of Berkeley, California, where Chris Anderson, the former editor in chief of Wired magazine, oversees business operations as the company’s CEO.

Anderson left his post at Wired in 2012 to partner full-time with Munoz, whom he met online.

Anderson understood that his partner was an impressive tinkerer with a big-picture view of the potential future of drones, but Munoz had no idea he was chatting with the head of a popular tech publication in 2007. Munoz was simply attracted to Anderson’s community website, DIY Drones, which the journalist had set up that year to foster a hobbyist community and, eventually, to help others build their own drones.

Munoz, who thinks he was around the seventh person to register on the site, used it to show off his prototype projects and share code with fellow enthusiasts, including his groundbreaking autopilot system created from the innards of a Nintendo Wii remote. It became clear to Anderson in 2009, when he wanted to begin selling more DIY kits over the Internet, that he could turn to Munoz to get it done more efficiently. That partnership led to 3DR’s incorporation in 2009.

As Munoz takes me into a warehouse in San Diego, 3DR employees in black T-shirts tasked with creating made-to-order drones are standing around a gigantic mutant device that’s comprised of more than half a dozen motors and held together with 3D-printed piping. Behind them, a huge square section of the floor is enveloped in black nets and used for testing devices indoors.

The 3DR staffers are eager to show Munoz what they’ve been working on — a drone that will deliver a special clamp to telephone poles for running new wire, like the fiber-optic cables that deliver ultrafast Internet connections. Munoz marvels at the drone’s size and picks it up, feeling the weight.

“It weighs as much as it looks,” he says with a laugh. For most people, something that size looks like it may weigh hundreds of pounds. For Munoz, who understands just how light drones have become in the last 10 years, the drone weighs an appropriate 45 pounds, more than 10 times the weight of his company’s standard vehicles.

“I didn’t even know they were doing this,” Munoz says with a shrug as he guides me through the rest of the warehouse, which is mostly used as a shipping center for the Tijuana plant.

“I don’t think drones are going to be just limited to delivery and photography.”

3DR, given its DIY roots, is a no-frills startup, at least with respect to its San Diego offices. There is no free food and you won’t find any nap pods or hulking commuter shuttle buses. An upper section of the warehouse floor was to be turned into a kind of game room with hardwood flooring, but it sits empty with a few lonely Ping-Pong tables and an open wall overlooking the warehouse floor. Munoz says they haven’t gotten around to remodeling it.

Munoz resisted leaving San Diego for Berkeley because he wanted to stay close to his family and where he grew up. As Anderson and others take on bigger roles at 3DR, Munoz gets to work on new technologies and features that help drones become more powerful and less costly.

“I don’t think drones are going to be just limited to delivery and photography,” he says. Munoz likens the new and undiscovered ways we’ll use drones to the versatility of different smartphone components. For instance, a smartphone camera was once restricted to snapping shots like a traditional camera. But now it can be used with sensors like accelerometers and gyroscopes as a measuring device. Or it can help software programs see and map environments.

“It’s amazing how similar they are,” he says.

 

Democratizing drones

The smartphone-to-drone comparison is apt. The term “drone” is shorthand for the kind of unmanned and somewhat autonomous aerial vehicle that now resembles an alien spacecraft more than the remote-controlled airplanes of the ’80s and ’90s.

Part of what’s driven the boom in drone sales has been the popularity of smartphones. Nearly every component inside the modern-day smartphone, from the GPS chip and camera to the battery and the processor, was made cheaper by Apple, Samsung and others racing to make handsets thinner, faster and more powerful every year.

Many drones are now controlled by apps on smartphones as well.

Fifteen years ago, it would have been impossible to cobble together enough computer parts to make a remote-controlled aerial vehicle without spending thousands of dollars, let alone one that could be flown without expertise. Now, the consumer drone market is exploding. Goldman Sachs estimates sales will triple by 2017 from this year’s $1.4 billion. It’s not just hobbyists. Industries from Hollywood and real estate to architecture and ecology are finding ways to utilize high-flying robots with high-definition cameras. And costs are falling every day.

“Gyroscopes used to cost $5,000 to $10,000,” Munoz said. “And the cheapest one was $300 — and you needed three” to help a drone fly. Today, you can buy a drone with all the gyroscopes and other requisite sensors to keep it stabilized in midair for around $50 from Amazon.

“This is stuff that used to be military industrial technology; you can buy it at RadioShack now,” wrote Anderson in Foreign Policy magazine in 2013. “I’ve never seen technology move faster than it’s moving right now and that’s because of the supercomputer in your pocket.”

And as a result of improvements in the smartphone and tablet apps created to fly these machines, maintaining and upgrading a drone is also becoming easier. That, in turn, is opening the market to new types of customers.

“The magic of computers is they solve that mechanical complexity and transfer it to the software,” Munoz says. The reason drones began taking off in the post-smartphone world was not just that costs were coming down, Munoz adds, but also that what we know as drones today were suddenly “very robust, very easy to repair and easier to fly.”

Compared with a complex rotor-powered model helicopter, the $1,000, 3.3-pound 3DR Solo, which the company released in May, is a mind-blowing engineering feat, complete with advanced software for automating recording and preventing crashes.


 


“Humans cannot control four motors” at the same time, Munoz says of the standard quadcopter drone design that has displaced traditional helicopter and airplane designs for consumer aircraft. But thanks to a handful of sensors, cameras and other smartphone tech coupled with stabilizing algorithms, motion detection and GPS, a drone like the Solo can be flown with ease within minutes of unpacking it.
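The stabilizing algorithms Munoz alludes to are, at their core, feedback control loops. The following is a minimal, hypothetical sketch of a single-axis PID controller of the kind flight controllers run for roll, pitch, and yaw many times per second; the class name, gains, and numbers are all illustrative, not taken from any 3DR product.

```python
class PID:
    """Minimal single-axis PID controller (illustrative gains, not flight-ready)."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        # Compare where we want to be (setpoint) with what the gyro reports.
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # The weighted sum is the correction mixed into the four motor outputs.
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical cycle: the craft is tilted 5 degrees off level, so the
# controller outputs a negative correction to push it back toward level.
pid = PID(kp=1.2, ki=0.05, kd=0.3)
correction = pid.update(setpoint=0.0, measured=5.0, dt=0.01)
```

Running one such loop per axis, and mixing the three corrections into the four motor speeds, is what lets a quadcopter hover on its own while the human pilot only supplies high-level direction.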

Unlike its biggest competitors, China-based DJI and French drone maker Parrot, 3DR offers a majority of its software components as open source, so anyone can download and use the code and modify it. The company also provides tools and mobile apps to develop your own drone software and to better understand and utilize all the data drones produce mid-flight.

Of course, 3DR has a long way to go before it can compete on the scale of DJI. That company, which was founded in 2006 and has nearly 3,000 employees and offices around the world, controls around 70 percent of the global drone market, according to Goldman Sachs.

DJI is also on track to pull in $1 billion in sales this year. That’s thanks to its popular Phantom line of easy-to-use drones, which are known for their sleek white, Apple-like look and for being the primary device of choice for drunk pilots who want to fly the device dangerously close to the White House.

3DR wants to continue fostering the open source software community from which it was born in the hopes it will create the drone and app platform everyone begins using, from rookies to the most advanced, software-literate pilots.

“Maybe in a year or two years, we’ll release a drone where it doesn’t matter how stupid you are, how drunk you are,” Munoz adds. “You won’t be able to crash it.”

 

Creating cybersecurity that thinks

http://www.computerworld.com/article/2881551/creating-cyber-security-that-thinks.html

By David Lopes Pegna

Computerworld | Feb 20, 2015 12:25 PM PT

 

Until recently, using the terms “data science” and “cybersecurity” in the same sentence would have seemed odd. Cybersecurity solutions have traditionally been based on signatures – relying on matches to patterns extracted from previously identified malware to detect attacks in real time. In this context, the use of advanced analytical techniques, big data and all the traditional components that have become representative of “data science” have not been at the center of cybersecurity solutions focused on identification and prevention of cyber attacks.

This is not surprising. In a signature-based solution, any given piece of malware, or a new flavor of it, must be identified, sometimes reverse-engineered, and have a matching signature deployed in a product update before it becomes “detectable.” For this reason, signature-based solutions cannot prevent zero-day attacks and provide very limited benefit compared to the predictive power offered by data science.
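To make that brittleness concrete, here is a minimal sketch of hash-based signature matching; the payload bytes and the signature store are invented for illustration and not taken from any real product:

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known-bad payloads.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def is_known_malware(payload: bytes) -> bool:
    """Exact-match detection: flags only byte-identical payloads."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_SIGNATURES

print(is_known_malware(b"malicious_payload_v1"))   # True: exact match
print(is_known_malware(b"malicious_payload_v1!"))  # False: a one-byte variant evades
```

A single changed byte produces a different digest, which is exactly why each new malware flavor needs its own deployed signature before it becomes detectable.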

Among the many definitions of data science that have emerged in the last few years, “gaining knowledge from data using a scientific approach” best captures some of the different components that characterize it.

In this series of posts, we will investigate how data science can be used to extract knowledge that identifies malware and potential persistent cybersecurity threats.

The unprecedented number of companies that reported breaches in 2014 is evidence that existing cybersecurity solutions are not effective at identifying malware or detecting attackers inside an organization’s network. The list of companies that have reported breaches and exfiltration of sensitive data grows at an alarming rate: from the large-volume data breaches at Target and Home Depot earlier in 2014, to the recent breaches at Sony Entertainment, JP Morgan and the most recent attack at Anthem in February, where personally identifiable information (PII) for 80 million Americans was stolen. Breaches involve big and small companies alike, showing that the time has come for a different approach to the identification and prevention of malware and malicious network activity.

Three technological advances enable data science to deliver new innovative cybersecurity solutions:

• Storage – the ease of collecting and storing large amounts of data on which analytics techniques can be applied (distributed systems such as cluster deployments).

• Computing – the prompt availability of large computing power allows easy use of sophisticated machine learning techniques to build models for malware identification.

• Behavior – the fundamental transition from identifying malware with signatures to identifying the particular behaviors an infected computer will exhibit.

Let’s discuss in more depth how each of the items above can be used for a rigorous application of data science techniques to solve today’s cybersecurity problems.

Having a large amount of data is of paramount importance in building analytical models that identify cyberattacks. Whether the model is a heuristic or a refined one based on machine learning, large numbers of data samples need to be analyzed to identify the relevant set of characteristics and aspects that will be part of the model – this is usually referred to as “feature engineering”. Then data needs to be used to cross-check and evaluate the performance of the model – this should be thought of as a process of training, cross-validation and testing of a given machine learning approach.
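The training / cross-validation / testing workflow described above can be sketched as follows, on synthetic feature vectors with invented labels; the 60/20/20 proportions are a common convention assumed here, not a rule from the article:

```python
import random

random.seed(0)

# Synthetic labeled samples: (feature_vector, label), label 1 = malicious.
samples = [([random.random() for _ in range(5)], random.choice([0, 1]))
           for _ in range(1000)]
random.shuffle(samples)

# Partition into training, cross-validation and test sets.
n = len(samples)
train_set = samples[: int(0.6 * n)]           # fit the model here
cv_set = samples[int(0.6 * n): int(0.8 * n)]  # tune features / parameters here
test_set = samples[int(0.8 * n):]             # estimate final performance here

print(len(train_set), len(cv_set), len(test_set))  # 600 200 200
```

Keeping the test set untouched until the end is what makes the final performance estimate honest.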

In a separate post, we will discuss in more detail how and why data collection is a crucial part in the data science approach to cybersecurity, and why it presents unique challenges.

One of the reasons for the recent increase in machine learning’s popularity is the prompt availability of large computing resources: Moore’s law holds that the processing power and storage capacity of computer chips double approximately every 24 months.

These advances have enabled the introduction of many off-the-shelf machine learning packages that allow training and testing of machine learning algorithms of increasing complexity on large data samples. These two factors make the use of machine learning practical for use in cybersecurity solutions.

There is a distinction between data science and machine learning, and we will discuss in a dedicated post how machine learning can be used in cybersecurity solutions, and how it fits into the more generic solution of applying data science in malware identification and attack detection.

The fundamental transition from signatures to behavior for malware identification is the most important enabler of applying data science to cybersecurity. Intrusion Prevention System (IPS) and Next-Generation Firewall (NGFW) perimeter security solutions inspect network traffic for matches with a signature that has been created in response to analysis of specific malware samples. Minor changes to malware reduce IPS and NGFW efficacy. However, machines infected with malware can be identified through the observation of their abnormal, post-infection behavior. Identifying abnormal behavior requires first the capability of identifying what’s normal, and then the use of rigorous analytical methods – data science – to identify anomalies.
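As a toy illustration of “first identify what’s normal, then flag deviations,” the sketch below scores a host’s hourly outbound-connection count against a historical baseline; the numbers and the three-sigma threshold are invented for illustration, not drawn from any real deployment:

```python
import statistics

# Hypothetical baseline: outbound connections per hour for one host.
baseline = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46, 40, 42]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from normal."""
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(44))   # normal variation, not flagged
print(is_anomalous(400))  # large deviation, flagged (e.g., beaconing or exfiltration)
```

Real behavioral models are far richer – many features, seasonality, per-host baselines – but the structure is the same: learn normal, score deviation.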

We have identified several key aspects that innovative cybersecurity solutions need to have. These require analysis of large data samples and application of advanced analytical methods in order to build data-driven solutions for malware identification and attack detection. A rigorous application of data science techniques is a natural solution to this problem, and represents a dramatic advancement in cybersecurity efficacy.

 

Big data sends cybersecurity back to the future

http://www.computerworld.com/article/2893656/the-future-of-cybersecurity-big-data-and-data-science.html

We are truly in the big data era. Here are the aspects of big data that are important to build the next generation of cybersecurity solutions.

David Lopes Pegna

Computerworld | Mar 12, 2015 3:30 AM PT

 

The main reason behind the rising popularity of data science is the incredible amount of digital data that gets stored and processed daily. Usually, this abundant data is referred to as “big data” and it’s no surprise that data science and big data are often paired in the same discussion and used almost synonymously. While the two are related, the existence of big data prompted the need for a more scientific approach – data science – to the consumption and analysis of this incredible wealth of data.

In order for cybersecurity professionals to see the greatest possibilities offered by big data and data science, it would be ideal to go Back to the Future and see how data insights will unfold. Lacking the time-travel expertise of that movie’s Doc Brown, today’s data scientists must imagine the possibilities of how big-data analysis will inform and educate our world.

As I discussed in the first blog of this series, the application of data science techniques to cybersecurity relies on the prompt availability of massive amounts of data on which models can be built and tested to extract interesting insights.

 

How much data is enough?

To give you an idea of how much data needs to be processed, a medium-size network with 20,000 devices (laptops, smartphones and servers) will transmit more than 50 TB of data in a 24-hour period. That means that over 5 Gbits must be analyzed every second to detect cyberattacks, potential threats and malware attributed to malicious hackers! We can now understand Doc Brown’s amazement when he shouted “1.21 gigawatts!” in Back to the Future.

While dealing with such volumes of data in real time poses difficult challenges, we should also remember that analyzing large volumes of data is necessary to create data-science models that can detect cyberattacks while minimizing both false positives (false alarms) and false negatives (failing to detect real threats).
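The quoted rate can be checked with quick arithmetic; whether the answer comes out “over 5” or closer to 4.6 Gbit/s depends on whether a terabyte is read as 10^12 or 2^40 bytes:

```python
SECONDS_PER_DAY = 86_400

# 50 TB per day expressed as a sustained bit rate, under both unit conventions.
decimal_rate = 50 * 10**12 * 8 / SECONDS_PER_DAY  # decimal terabytes
binary_rate = 50 * 2**40 * 8 / SECONDS_PER_DAY    # binary terabytes (tebibytes)

print(f"{decimal_rate / 1e9:.2f} Gbit/s")  # 4.63 Gbit/s
print(f"{binary_rate / 1e9:.2f} Gbit/s")   # 5.09 Gbit/s
```

Either way, the order of magnitude – several gigabits every second, continuously – is what makes real-time analysis hard.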

 

The three V’s of context

When discussing big data, the three big “V’s” are often mentioned: Volume, Variety and Velocity. Let’s see what these really mean in a cybersecurity context.

1. Volume: large quantities of data are necessary to build robust models and properly test them. When is “large” large enough? The quote below, from a 2005 blog entry by statistician Andrew Gelman, is very relevant.

“Sample sizes are never large. If N (i.e. the sample size) is too small to get a sufficiently precise estimate, you need to get more data (or make more assumptions). But once N is “large enough,” you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). N is never enough because if it were “enough” you’d already be on to the next problem for which you need more data.”

If a data scientist is relying on machine learning to build a model, large data samples are necessary to understand and extract new features, and to properly estimate the performance of the model before deploying it in production environments. Also, when a given model is based on simple rules or heuristic findings, it is of paramount importance to test it on large data samples to assess performance and the possible rate of false positives. When the data sample is “large” enough and, as I will discuss in the second point, has enough “variability”, the data scientist can try to identify different ways of categorizing the data, and unexpected properties of the data may become evident.

2. Variety: in big data discussions, this term usually refers to the number of types of data available. From the point of view of data organization, this refers to structured data (e.g., data that follows a precise schema) versus unstructured data (e.g., log records or data that involves a lot of text). The latter sometimes doesn’t follow a precise schema and, while this poses some challenges, unstructured data often provides a richness of content that can be beneficial when building a data science model.

For cybersecurity data science models, “Variability” really matters more than “Variety.” Variability refers to the range of values that a given feature could take in a data set.

The importance of having data with enough variability when building cybersecurity models cannot be stressed enough, and it’s often underestimated. Network deployments in organizations – businesses, government agencies and private institutions – vary greatly. Commercial network applications are used differently across organizations, and custom applications are developed for specific purposes. If the data sample on which a given model is tested lacks variability, the risk of an incorrect assessment of the model’s performance is high. If a given machine learning model has been built properly (e.g., without “overtraining”, which happens when the model picks up very specific properties of the data on which it has been trained), it should be able to generalize to “unseen” data. However, if the original data set lacks variability, the chance of improper modeling (for example, misclassification of a given data sample) is higher.

3. Velocity: the amount of digital information increases more than tenfold every five years, according to The Economist article “Data, data everywhere”. As I noted in the first post of this series, the analysis of large data samples is possible thanks to the nearly ubiquitous availability of low-cost compute and storage resources. If a data scientist has to analyze hundreds of millions of records and every single query to the data set requires hours, building and testing models would be a cumbersome and tedious process. Being able to quickly iterate through the data, modify some parameters in a particular model and quickly assess its performance are all crucial aspects of the successful application of data science techniques to cybersecurity.

Volume, Variety, and Velocity (as well as Variability) are all essential characteristics of big data that have high relevance for applying data science to cybersecurity. More recent discussions on big data have also started to emphasize the concept of the “Value” of data.

In the next post in this series I will start to discuss how machine learning can be applied to cybersecurity and the value of your network’s data.

 

 

Cybersecurity, data science and machine learning: Is all data equal?

http://www.computerworld.com/article/2908507/cybersecurity-data-science-and-machine-learning-is-all-data-equal.html

Computerworld | Apr 16, 2015 10:06 AM PT

By David Lopes Pegna

 

To apply machine learning to cybersecurity data, it’s important to understand the different value of the data that will be used to build machine learning models.

In big-data discussions, the value of data sometimes refers to the predictive capability of a given data model, and other times to the discovery of hidden insights that appear when rigorous analytical methods are applied to the data itself. From a cybersecurity point of view, I believe the value of data refers first to the “nature” of the data itself. Positive data – i.e., malicious network traffic data from malware and cyberattacks – have much more value here than positive data do in many other data science problems. To better understand this, let’s start to discuss how a wealth of network traffic data can be used to build network security models through the use of machine learning techniques.

Machine learning, together with data science and big data, is gaining a lot of popularity due to its widespread use in many tech companies around the world. The applications of machine learning range from recommendation systems (e.g., Netflix, Amazon) to spam filtering by popular Web-based email providers to image and voice recognition and many other applications.

From a cybersecurity perspective, data models need to have the predictive power to automatically distinguish between normal, benign network traffic and abnormal, potentially malicious traffic that can be an indicator of an active cyberattack or malware infection. Machine learning can be used to build classifiers, as the goal of the models is to provide a binary response (e.g., good or bad) to the network traffic being analyzed. This is similar to the problem that spam filters need to address, since they are built to distinguish normal emails from ads, phishing, Trojan horses and other types of spam.

 

Classifiers: Separating normal from malicious

In order to build a classifier, large amounts of data are required. This data will be used to train a machine-learning algorithm and evaluate the classifier’s performance. Data falls into two categories: positive and negative samples. In the case of the spam classifier example, “positive” data refers to data showing the behavior that the classifier needs to be able to detect: real spam email. In the case of a network security model, “positive” data refers to traffic showing the behavior of real cyberattacks and malware infections. “Negative” data refers to “normal” data. In the case of the spam classifier, “negative” data are legitimate emails; for a network security model, it is normal network traffic data.
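To make the positive/negative framing concrete, here is a toy nearest-centroid classifier over two invented traffic features; both the feature choice and the classification rule are illustrative assumptions, not the method of any particular product:

```python
import math

# Hypothetical two-feature samples: (bytes_out_per_minute, distinct_dest_ports).
positive = [(9000, 40), (12000, 55), (11000, 48)]    # malicious: exfiltration-like
negative = [(300, 3), (500, 5), (400, 2), (350, 4)]  # benign: normal client traffic

def centroid(samples):
    """Per-feature mean of a list of feature tuples."""
    return tuple(sum(v) / len(samples) for v in zip(*samples))

POS_C, NEG_C = centroid(positive), centroid(negative)

def classify(x):
    """Binary response: label by whichever class centroid is closer."""
    return "malicious" if math.dist(x, POS_C) < math.dist(x, NEG_C) else "benign"

print(classify((10000, 50)))  # malicious
print(classify((450, 4)))     # benign
```

Even in this caricature, the positive samples do all the work: without the three malicious examples there would be nothing to separate benign traffic from.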

From what I have discussed so far, it would seem the two problems described above are reasonably similar. We want a classifier to detect spam and only keep good emails, and we want a network security model that detects cyberattacks and malware infections without incorrectly judging benign network traffic to be harmful.

What is intrinsically different between these two problems is the prompt availability of positive data. Positive data for spam emails are abundant and readily available for building a classifier. Despite the increase in news reports of cyberattacks affecting organizations across a broad set of industries, positive data from real cyberattacks and malware infections are not easily accessible. And this is particularly true for “targeted” attacks, where the attack is highly customized for a particular victim.

While there are libraries of malware samples (just to name a couple of examples, Deep End Research and McAfee), hackers quickly modify their techniques, and attacks are increasingly sophisticated, making these libraries quickly obsolete. This is not surprising, as the “targeted” malware is often custom-built for large-scale monetization via targeted attacks, where data is stolen or destroyed, and it is designed to remain stealthy for as long as possible. This applies to many of the data breaches being reported in the last 18 months. Even for attacks that may seem similar in their goals (e.g., Target Corp, Neiman Marcus, Home Depot), the tactics of the attacks were always adapted to the particular victim as was pointed out by The Washington Post in a recent article on the Anthem breach.

 

The value of the positive samples

It seems evident that the positive samples used to build machine-learning models have an intrinsically high value and are of the utmost importance to guarantee that the predictive power of the model will generalize well enough to identify new cyberattack and malware flavors. This condition is necessary but not sufficient, as the choice of features used to build the model also has an extremely high impact on model performance, as I will discuss in a future blog. In fact, it would not make sense to try to collect extremely large amounts of positive samples before testing a given machine learning model, as feature selection and proper training techniques are also very important aspects of machine learning.

It should also be clear that, no matter how many positive samples are available, the training data for the machine-learning model will be highly unbalanced, as the negative samples (e.g., benign network traffic) will always be many orders of magnitude more abundant than the positive (e.g., cyberattack, malware infection) data samples. The typical example presented in this context: if 99% of the data in a classification problem belongs to one class (e.g., benign traffic data), a classifier can achieve 99% accuracy just by labeling everything as benign! This is a well-known problem and can be resolved through a proper choice of the evaluation metric, proper training dataset balancing and the use of sophisticated sampling methods. The application of these techniques also allows you to determine whether the right quantity of positive samples is available, or whether more data is necessary.
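The 99% trap is easy to reproduce; the counts below are invented to match the example in the text:

```python
# 990 benign flows, 10 malicious: a "classifier" that never alerts.
labels = ["benign"] * 990 + ["malicious"] * 10
predictions = ["benign"] * 1000

accuracy = sum(p == t for p, t in zip(predictions, labels)) / len(labels)
true_positives = sum(p == t == "malicious" for p, t in zip(predictions, labels))
recall = true_positives / labels.count("malicious")

print(f"accuracy = {accuracy:.0%}")  # 99%: looks excellent
print(f"recall   = {recall:.0%}")    # 0%: every attack was missed
```

This is why recall- and precision-based metrics, rather than raw accuracy, are the right yardstick on unbalanced security data.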

The collection of positive samples is therefore one of the first and most important tasks that enables the use of machine-learning algorithms to build cybersecurity models; this sample collection process sometimes can be lengthy. For example, it may be necessary to run a sample of malware on a dedicated sandbox and collect output from several different sources in order to extract the relevant features connected to that malware sample. This process could take several hours for just one sample.

This post completes my discussion of the main aspects of big data and their relevance to cybersecurity – the value of data – that I started in the previous post in this series, Big data sends cybersecurity back to the future. In the next post, I will discuss which issues should be considered in choosing the right set of features before training a given machine-learning model in the context of cybersecurity data.

 

 

Cybersecurity and machine learning: The right features can lead to success

http://www.computerworld.com/article/2947617/data-analytics/cybersecurity-and-machine-learning-how-selecting-the-right-features-can-lead-to-success.html

David Lopes Pegna

Computerworld | Jul 14, 2015 6:28 AM PT

 

Big data is around us. However, it is common to hear from a lot of data scientists and researchers doing analytics that they need more data. How is that possible, and where does this eagerness to get more data come from?

Very often, data scientists need lots of data to train sophisticated machine-learning models. The same applies when using machine-learning algorithms for cybersecurity. Lots of data is needed in order to build classifiers that identify, among many different targets, malicious behavior and malware infections. In this context, the eagerness to get vast amounts of data comes from the need to have enough positive samples — such as data from real threats and malware infections — that can be used to train machine-learning classifiers.

 

Is the need for large amounts of data really justified? It depends on the problem that machine learning is trying to solve. But exactly how much data is needed to train a machine-learning model should always be associated with the choice of features that are used.

Features are the set of information that’s provided to characterize a given data sample. Sometimes the number of features available is not directly under control because it comes from sophisticated data pipelines that can’t be easily modified. In other cases, it’s relatively easy to access new features from existing data samples, or properly pre-process data to build new and more interesting features. This process is sometimes known as “feature engineering.”

Machine-learning books emphasize the importance of accurately choosing the right features to train a machine-learning algorithm. This is an important consideration, because an endless amount of training data, if paired with the wrong set of features, will not produce a reliable model.

This is especially true when feature choices for a machine-learning algorithm are applied to network traffic data to identify cybersecurity threats. For some models, knowing which protocol the traffic is using — such as TCP or UDP — could be relevant, although it might be a useless feature for other cases.

Applying natural language processing (NLP) techniques for feature extraction could be the right choice for models that involve HTTP data, such as parsing the URL field. However, it might not be relevant for models that look primarily at aggregate information about network traffic flows like client/server communications.

In general, the number of features available is connected to the ability to parse a given network protocol, because without such parsing, the amount of information that can be extracted from raw network traffic is fairly limited.

The discussion above could create the wrong impression that using an extremely large set of features would solve any machine-learning problem.

Indeed, many off-the-shelf machine-learning libraries provide easy-to-access methods to assess the importance of the different features used to train an algorithm. Such tools try to automate the process of choosing the right features, but they should not replace a careful inspection of the features being tested.

The quality of the features selected to solve a machine-learning problem is much more important than the number of features utilized. This important point can be seen as a very simple expression of the famous curse of dimensionality (R. Bellman, Adaptive Control Processes: A Guided Tour, Princeton University Press, Princeton, N.J., 1961).

A lot has been written about this topic, and several different definitions exist. One reasonably accurate, if somewhat cryptic, statement is that as feature dimensionality increases, the volume of the space grows so fast that the available data becomes sparse.

A different way to explain this is that as feature dimensionality increases, distance among different samples in the feature-space quickly converges to the same value.

This is fairly intuitive, because the sparsity of the data pushes the different data samples toward corners of the feature space that are asymptotically equidistant.
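The concentration of distances can be simulated directly; the point counts and dimensions below are arbitrary choices for illustration:

```python
import math
import random

def distance_spread(dim, n_points=50, seed=0):
    """Ratio of farthest to nearest pairwise distance among random points."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:]]
    return max(dists) / min(dists)

for dim in (2, 10, 100, 1000):
    # As dimensionality grows, the ratio shrinks toward 1:
    # nearest and farthest neighbors become nearly equidistant.
    print(dim, round(distance_spread(dim), 2))
```

In two dimensions the farthest pair is many times more distant than the nearest; in a thousand dimensions the two are nearly the same, which is exactly the concentration the text describes.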

As many machine learning algorithms rely on one form or another of a distance definition (e.g., Euclidean), these algorithms quickly lose predictive power as that distance definition becomes meaningless.

For a fixed amount of training data, an increasing number of features will lead to overfitting problems. For example, classifiers that have extremely good performance on the training dataset might have very poor predictive power on unseen data.

One possible solution in this case is to increase the size of the training dataset. But as pointed out above for network traffic classifiers, this is sometimes not possible, or is very expensive and time-consuming.

A potentially useful approach involves the proper selection of available features, identifying relationships among them, and using techniques like principal component analysis (PCA) to help reduce the feature dimensionality. But the new “reduced” feature sets run the risk of being less intuitive than the original ones.
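A minimal sketch of the PCA idea on two invented, correlated traffic features follows; real use would involve many features and a library implementation, and the closed-form angle used here works only for the 2x2 covariance case:

```python
import math

# Hypothetical standardized features per flow: (bytes_in, bytes_out).
data = [(-1.2, -1.0), (-0.5, -0.6), (0.1, 0.2), (0.6, 0.5), (1.0, 0.9)]
n = len(data)

mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
cxx = sum((x - mean_x) ** 2 for x, _ in data) / n            # variance, feature 1
cyy = sum((y - mean_y) ** 2 for _, y in data) / n            # variance, feature 2
cxy = sum((x - mean_x) * (y - mean_y) for x, y in data) / n  # covariance

# Leading eigenvector of the 2x2 covariance matrix = first principal component.
theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
pc1 = (math.cos(theta), math.sin(theta))

# Each 2-D sample collapses to a single coordinate along PC1.
projected = [(x - mean_x) * pc1[0] + (y - mean_y) * pc1[1] for x, y in data]
print([round(p, 2) for p in projected])
```

The single projected coordinate retains more variance than either original feature, which is the sense in which dimensionality is reduced with minimal information loss – at the cost, as noted above, of a less intuitive axis.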

As we discussed in a previous blog, the limited availability of positive samples is a critical constraint on successfully training a cybersecurity machine-learning model. The proper choice of features is equally important and plays a vital role in building classifiers that have a high degree of generalization and work successfully on data never seen in training or cross-validation samples.

 

 

Congress balks at Obama’s UN move on Iran deal

By Burgess Everett and Lauren French

| 7/16/15 6:24 PM EDT


http://www.politico.com/story/2015/07/congress-responds-to-obamas-un-move-on-iran-deal-120257.html

 

President Barack Obama has a new hurdle to selling his Iran deal on Capitol Hill: Bipartisan opposition to his decision to submit the nuclear accord to the United Nations before Congress votes on the agreement.

Sens. Bob Corker (R-Tenn.) and Ben Cardin (D-Md.) said on Thursday afternoon that they disagreed with the U.S. pushing the agreement through the UN before Congress votes this September to approve or reject it, a troubling development for an administration still trying to win over both men.

Cardin, the top Democrat on the committee, questioned Vice President Joe Biden about the matter during a closed door meeting with committee Democrats on Thursday. He said Biden responded with an explanation of the “differences between the executive and legislative branches.” That didn’t satisfy Cardin, who said Obama should put the brakes on UN consideration until Congress has 60 days to review the bill, a period that technically hasn’t even started yet because the agreement has not been formally submitted to Capitol Hill.

“There was nothing to be lost by waiting until after the review period was over,” Cardin said in an interview. “It could be inconsistent [with how Congress votes] and therefore it would have been better if that had been deferred until after the 60-day period.”


Corker called Obama’s move an “affront to the American people.” He chastised U.S. Ambassador to the United Nations Samantha Power by telephone on Thursday morning and said because Congress has not yet voted on lifting sanctions that are crucial to the deal, the UN is moving forward on an international agreement they may not be able to implement.

“I question the judgment of our president,” Corker told reporters, fuming into the microphones as Biden escaped a press scrum down a narrow flight of stairs. “This is exactly what we were trying to stop. We wanted the American people to understand this agreement before it went in place.”

White House spokesman Eric Schultz said the UN process “does not lessen the importance of Congress or its review.”

“We will not begin implementation of the plan until after the Congressional review period is over,” Schultz said.

Despite frustrations among key lawmakers, the White House was making steady inroads with skeptical Democrats as officials blanketed lawmakers with briefings by Biden, top security officials and personal phone calls. Both the House and the Senate will get all-members briefings next week.


In nearly a dozen interviews with lawmakers exiting this week’s briefings, Democrats seemed reassured by answers they were getting, suggesting the administration can build the support it needs to sustain a veto of any GOP legislation that would scuttle the deal.

“He answered a whole series of difficult and demanding questions and provided encouraging and thoughtful responses,” said Sen. Chris Coons (D-Del.) of Biden.

Meanwhile, Sen. Ted Cruz of Texas, a conservative aspirant for the GOP presidential nomination, announced his intent to delay all State Department nominees and legislation to authorize the agency until Obama tells Cruz that he will block a UN vote.

“It seems your administration intended all along to circumvent this domestic review,” Cruz wrote in a letter to the president. “That Samantha Power has already introduced a draft resolution to the Security Council portrays an offensive level of disrespect for the American people and their elected representatives in Congress.”

It’s unclear how widespread the ramifications of the administration’s submission to the U.N. will be. But it doesn’t appear to be doing the administration any favors with Cardin, a key swing Democrat that the administration is likely to need on its side, or Corker, the undecided chairman who will lead an aggressive hearing schedule over the next two weeks.

But the popular congressional review law crafted by Cardin and Corker includes no provisions that punish the administration for submitting the deal to the United Nations before Congress votes, leading Republicans like House Majority Leader Kevin McCarthy of California to accuse Obama of violating the “spirit” of the law rather than the law itself.

Other lawmakers shrugged off the dispute. Senate Majority Whip John Cornyn (R-Texas) called it “immaterial” to lawmakers’ role in deciding whether or not to lift congressional sanctions, and Sen. Tim Kaine (D-Va.) said it was wholly consistent with the long-debated nuclear review law, which states that the UN and the administration can lift “sanctions that Congress didn’t have anything to do with.”

“You could certainly argue with the tactic, but it was very plain,” Kaine said.


Biden huddled with committee Democrats for more than an hour, giving each of nine Foreign Relations Democrats a chance to ask the former chairman questions about nuclear inspections and enforcement. Biden reassured fellow Delawarean Coons on lifting the arms embargo on Iran, explaining there are “alternative ways for us to prevent the Iranians from engaging in the sale of conventional arms in the region.”

Shortly before the Biden briefing on Thursday morning more than a dozen Jewish Democratic lawmakers huddled with Deputy National Security Adviser Ben Rhodes and Jeffery Prescott, a senior director with the National Security Council. Attendees said the White House stayed away from making sales pitches about Obama’s legacy or securing a win for the White House with a sustained veto.

“There was zero politics,” said New York Rep. Steve Israel, who is skeptical of the Iran deal.

“We wanted to hear about how the money that would be available to Iran once the sanctions were repealed and one of the most important answers to that is that Iran wants sanction relief because their economy is in great trouble,” said Illinois Rep. Jan Schakowsky, a supporter of a deal with Iran. “[They said the] money would be used to help the economy.”

California Rep. Adam Schiff, the top Democrat on the House Intelligence Committee, said Rhodes reiterated that the White House was prepared to use force if Iran violated the terms of the nuclear deal. That’s of paramount concern to hawkish Democrats who have questioned the ability of the U.S. to attack Iran if the regime continues to build up its nuclear capacities or ignores inspection requests from the International Atomic Energy Agency.

“I think it was very helpful. Some came into the meeting ready to support the administration. Others, like myself, are going to continue to reserve judgment. There are a lot of people I want to talk with to help inform my decision,” Schiff said.

That work trying to move the influential bloc of House Democrats followed a Wednesday afternoon White House visit by Democratic Sens. Joe Manchin of West Virginia, Kirsten Gillibrand of New York, Tom Carper of Delaware, Michael Bennet of Colorado and Martin Heinrich of New Mexico. Of particular concern to the White House are incumbent Bennet, Gillibrand, who represents a large Jewish constituency, and Manchin, a fiscal and social conservative who’s dovish on foreign policy.

Manchin sounded inclined to back the deal with “everything that I’m seeing now.” His support would be a major boon for the administration, but he said in an interview that he won’t make a final decision until Kerry, Moniz and Lew give a classified briefing for all senators next week.

“Are we better to move down this path or no path at all?” Manchin asked rhetorically. “I will feel much better next week when we get a secured briefing.”

“The meeting was very useful, I thought the description of the type of transparency and oversight they have in place was reassuring,” Gillibrand said.

Reassuring enough to vote for it? “I’m going to continue to review before I make a decision.”

Read more: http://www.politico.com/story/2015/07/congress-responds-to-obamas-un-move-on-iran-deal-120257.html

 

Rasmussen Reports

What They Told Us: Reviewing Last Week’s Key Polls


Saturday, July 18, 2015

Most of the news focus has been on the Republican side of the presidential race, but tonight in Iowa all five announced Democratic candidates will share the same stage for the first time. Does it matter?

Our brand new monthly Hillary Meter shows that former Secretary of State Hillary Clinton remains the overwhelming favorite for the Democratic nomination next year. She may be pulled to the left on some positions by challengers Bernie Sanders and Martin O’Malley, and the meter will be watching to see if the public perceives that as an ideological shift on Clinton’s part. 

It will be interesting to hear what the Democratic hopefuls have to say about the Obama administration’s just-concluded deal with Iran. The agreement, which hopefully puts the brakes on the Iranian nuclear weapons development program, is being criticized by Republicans – and Democrats – in Congress, and voters believe more strongly than ever that the president needs Congress’ okay before moving ahead on the deal with Iran.

Wisconsin Governor Scott Walker has long been considered one of the more formidable contenders for next year’s Republican presidential nomination, but do GOP voters agree now that he’s formally entered the race?

Former Florida Governor Jeb Bush, the current GOP front-runner according to our polling, caused a stir on the campaign trail recently when he said Americans need to work harder to get the U.S. economy back on its feet. But most voters feel strongly instead that the government and special interests have gamed the economy to deny Americans what they are due.

Voters have said in surveys for years that big business and big government generally work together against the interests of investors and consumers.  They’ve also long felt that the federal government has become a special interest of its own.

How do Americans rate their workload anyway compared to those in most other countries?

Bush also made news recently by releasing 33 years of tax returns. Voters think all the candidates for the White House need to make their tax returns public.

Here’s a look at how all the announced presidential candidates stack up so far.

Congress is tied up again over the direction of the next federal budget, but voters aren’t holding their breath waiting for spending cuts.

Most voters favor spending cuts in every program of the federal government, although that support lessens if the defense budget or entitlements are taken off the chopping block.

Puerto Rico is $72 billion in debt and can’t pay its bills. How do voters feel about a federal bailout of Puerto Rico?

While a city like Detroit can declare bankruptcy, states don’t have that option. Do voters think bankruptcy protection should be extended to states? 

The state budget picture hasn’t improved for most voters, even though they’re much more likely to be paying higher rather than lower taxes these days.

Voters in states run mostly by Democrats are more likely than those in GOP-run states to feel their state government is too big, but all give similar performance reviews to those governments. 

Unlike most states and the federal government, it looks like consumers will be cutting back their spending in several areas next month. 

Thirty-one percent (31%) of voters now think the country is heading in the right direction.

The president’s job approval rating remains in the negative mid-teens. 

In other surveys last week:

— U.S. voters aren’t overly concerned that Greece’s financial problems will affect them personally.

— Following what appears to be the largest cyberattack against the U.S. government in history, voters seriously doubt the government can protect their private information and question its performance at protecting secrets.

— When the New York Stock Exchange, the Wall Street Journal and United Airlines all experienced outages due to technical difficulties last week, Americans took notice. Despite an all-clear from the Department of Homeland Security, many still wonder whether the outages were merely coincidental.

The president recently hosted the head of Vietnam’s Communist Party at the White House in an effort to further strengthen America’s relationship with its former foe, but how do voters here feel about that?
