Facebook Notes allows users to include <img> tags. Whenever an <img> tag is used, Facebook crawls the image from the external server and caches it. Facebook will only cache each image once; however, using random GET parameters the cache can be bypassed, and the feature can be abused to cause a huge HTTP GET flood.
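The bypass can be sketched in a few lines of Python (the URL and helper name below are illustrative, not from the original advisory):

```python
# Sketch of the cache-bypass trick: many <img> URLs pointing at the
# same large file on the victim's server, each made unique by a
# random query parameter so the crawler never gets a cache hit.
import random

def cache_busting_urls(base_url: str, count: int) -> list[str]:
    return [f"{base_url}?r={random.getrandbits(32)}" for _ in range(count)]

urls = cache_busting_urls("http://victim.example/large-image.jpg", 5)
```

Embedding thousands of such URLs in a single note turns one page view into thousands of full-size fetches against the target.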
Back in December, Eloi Vanderbecken of Synacktiv Digital Security was visiting his family for the Christmas holiday, and for various reasons he had the need to gain administrative access to their Linksys WAG200G DSL gateway over Wi-Fi. He discovered that the device was listening on an undocumented Internet Protocol port number, and after analyzing the code in the firmware, he found that the port could be used to send administrative commands to the router without a password.
After Vanderbecken published his results, others confirmed that the same backdoor existed on other systems based on the same Sercomm modem, including home routers from Netgear, Cisco (both under the Cisco and Linksys brands), and Diamond. In January, Netgear and other vendors published a new version of the firmware that was supposed to close the back door.
However, that new firmware apparently only hid the backdoor rather than closing it. In a PowerPoint narrative posted on April 18, Vanderbecken disclosed that the “fixed” code concealed the same communications port he had originally found (port 32764) until a remote user employed a secret “knock”—sending a specially crafted network packet that reactivates the backdoor interface.
The packet structure used to open the backdoor, Vanderbecken said, is the same used by “an old Sercomm update tool”—a packet also used in code by Wilmer van der Gaast to “rootkit” another Netgear router. The packet’s payload, in the version of the backdoor discovered by Vanderbecken in the firmware posted by Netgear, is an MD5 hash of the router’s model number (DGN1000).
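For illustration, the hash step Vanderbecken describes can be reproduced in one line. This is a hedged sketch of the payload derivation only; the surrounding packet framing, and whether the raw digest or its hex encoding is actually sent, is not specified here:

```python
# The "knock" payload for the Netgear DGN1000 firmware, per the
# slides: an MD5 hash of the router's model number. Everything
# around the hash (packet headers, encoding) is omitted/assumed.
import hashlib

knock_payload = hashlib.md5(b"DGN1000").hexdigest()
```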
The nature of the change, which leverages the same code as was used in the old firmware to provide administrative access over the concealed port, suggests that the backdoor is an intentional feature of the firmware and not just a mistake made in coding.
Do not feed RSA private key information to the random subsystem as entropy. It might be fed to a pluggable random subsystem…. What were they thinking?!
Wow. The entire concept of it is so bad that if you can’t avoid it, it’s literally better to call exit() than go through with it.
“I suspect that over the past eight months, many companies have taken a real hard look at their existing policies about tipping off the U.S. government,” he said. “That’s the price you pay when you’re acting like an out-of-control offensive adversary.”
Meanwhile, the FBI fiercely resists any efforts at Congressional oversight, especially on whistleblower matters. For example, four months ago I sent a letter to the FBI requesting its training materials on the Insider Threat Program. This program was announced by the Obama Administration in October 2011. It was intended to train federal employees to watch out for insider threats among their colleagues. Public news reports indicated that this program might not do enough to distinguish between true insider threats and legitimate whistleblowers. I relayed these concerns in my letter. I also asked for copies of the training materials. I said I wanted to examine whether they adequately distinguished between insider threats and whistleblowers.
In response, an FBI legislative affairs official told my staff that a briefing might be the best way to answer my questions. It was scheduled for last week. Staff for both Chairman Leahy and I attended, and the FBI brought the head of their Insider Threat Program. Yet the FBI didn’t bring the Insider Threat training materials as we had requested. However, the head of the Insider Threat Program told the staff that there was no need to worry about whistleblower communications. He said whistleblowers had to register in order to be protected, and the Insider Threat Program would know to just avoid those people.
Now I have never heard of whistleblowers being required to “register” in order to be protected. The idea of such a requirement should be pretty alarming to all Americans. Sometimes confidentiality is the best protection a whistleblower has. Unfortunately, neither my staff nor Chairman Leahy’s staff was able to learn more, because only about ten minutes into the briefing, the FBI abruptly walked out. FBI officials simply refused to discuss any whistleblower implications in its Insider Threat Program and left the room. These are clearly not the actions of an agency that is genuinely open to whistleblowers or whistleblower protection.
Note that not all code, even in the same project, is equally exposed. It’s tempting to say it’s a needle in a haystack. But I promise you this: if anybody patches Linux/net/ipv4/tcp_input.c (which handles inbound network traffic for Linux), a hundred alerts fire, and many of them go to people no one would call friendly. One guy, one night, patched OpenSSL. Not enough defenders noticed, and it took Neel Mehta to do something.
We fix that, or this happens again. And again. And again.
No more accidental finds. The stakes are just too high.
A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interest of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.
– C. A. R. Hoare, from his Turing Award speech 34 years ago
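Hoare’s precaution is trivial to demonstrate in any language that checks subscripts at run time; a minimal Python sketch:

```python
# Mandatory bounds checking turns an out-of-range subscript into an
# immediate, diagnosable failure instead of silent corruption.
values = [10, 20, 30]

caught = False
try:
    _ = values[5]          # subscript beyond the array's bounds
except IndexError:         # checked at run time, as Hoare describes
    caught = True
```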
Microsoft is not unique in claiming the right to read users’ emails – Apple, Yahoo and Google all reserve that right as well, the Guardian has determined.
The broad rights email providers claim for themselves have come to light following Microsoft’s admission that it read a journalist’s Hotmail account in an attempt to track down the source of an internal leak. But most webmail services claim the right to read users’ email if they believe that such access is necessary to protect their property.
The senior lawyer for the National Security Agency stated unequivocally on Wednesday that US technology companies were fully aware of the surveillance agency’s widespread collection of data, contradicting months of angry denials from the firms.
Rajesh De, the NSA general counsel, said all communications content and associated metadata harvested by the NSA under a 2008 surveillance law occurred with the knowledge of the companies – both for the internet collection program known as Prism and for the so-called “upstream” collection of communications moving across the internet.
Asked during a Wednesday hearing of the US government’s institutional privacy watchdog if collection under the law, known as Section 702 or the Fisa Amendments Act, occurred with the “full knowledge and assistance of any company from which information is obtained,” De replied: “Yes.”
When the Guardian and the Washington Post broke the Prism story in June, thanks to documents leaked by whistleblower Edward Snowden, nearly all the companies listed as participating in the program – Yahoo, Apple, Google, Microsoft, Facebook and AOL – claimed they did not know about a surveillance practice described as giving NSA vast access to their customers’ data. Some, like Apple, said they had “never heard” the term Prism.
“Prism was an internal government term that as the result of leaks became the public term,” De explained. “Collection under this program was a compulsory legal process that any recipient company would receive.”
I think there’s a good case to be made for security as an exercise in public health. It sounds weird at first, but the parallels are fascinating and deep and instructive.
Last year, when I finished that talk in Seattle, a talk about all the ways that insecure computers put us all at risk, a woman in the audience put up her hand and said, “Well, you’ve scared the hell out of me. Now what do I do? How do I make my computers secure?”
And I had to answer: “You can’t. No one of us can. I was a systems administrator 15 years ago. That means that I’m barely qualified to plug in a WiFi router today. I can’t make my devices secure and neither can you. Not when our governments are buying up information about flaws in our computers and weaponising them as part of their crime-fighting and anti-terrorism strategies. Not when it is illegal to tell people if there are flaws in their computers, where such a disclosure might compromise someone’s anti-copying strategy.
“But: If I had just stood here and spent an hour telling you about water-borne parasites; if I had told you about how inadequate water-treatment would put you and everyone you love at risk of horrifying illness and terrible, painful death; if I had explained that our very civilisation was at risk because the intelligence services were pursuing a strategy of keeping information about pathogens secret so they can weaponise them, knowing that no one is working on a cure; you would not ask me ‘How can I purify the water coming out of my tap?’”
Because when it comes to public health, individual action only gets you so far. It doesn’t matter how good your water is: if your neighbour’s water gives him cholera, there’s a good chance you’ll get cholera, too. And even if you stay healthy, you’re not going to have a very good time of it when everyone else in your country is stricken and has taken to their beds.
If you discovered that your government was hoarding information about water-borne parasites instead of trying to eradicate them; if you discovered that they were more interested in weaponising typhus than they were in curing it, you would demand that your government treat your water-supply with the gravitas and seriousness that it is due.
Developers from the Replicant project (a free Android offshoot) have documented a serious software back-door in Samsung’s Android phones, which “provides remote access to the data stored on the device.” They believe it is “likely” that the backdoor could provide “over-the-air remote control” to “access the phone’s file system.”
At issue is Samsung’s proprietary IPC protocol, used in its modems. This protocol implements a set of commands called “RFS commands.” The Replicant team says that it can’t find “any particular legitimacy nor relevant use-case” for adding these commands, but adds that “it is possible that these were added for legitimate purposes, without the intent of doing harm by providing a back-door. Nevertheless, the result is the same and it allows the modem to access the phone’s storage.”
Hundreds of open source packages, including the Red Hat, Ubuntu, and Debian distributions of Linux, are susceptible to attacks that circumvent the most widely used technology to prevent eavesdropping on the Internet, thanks to an extremely critical vulnerability in a widely used cryptographic code library.
The bug in the GnuTLS library makes it trivial for attackers to bypass secure sockets layer (SSL) and Transport Layer Security (TLS) protections available on websites that depend on the open source package. Initial estimates circulating in Internet discussions indicate that more than 200 different operating systems or applications rely on GnuTLS to implement crucial SSL and TLS operations, but it wouldn’t be surprising if the actual number is much higher. Web applications, e-mail programs, and other code that use the library are vulnerable to exploits that allow attackers monitoring connections to silently decode encrypted traffic passing between end users and servers.
The bug is the result of commands in a section of the GnuTLS code that verify the authenticity of TLS certificates, which are often known simply as X509 certificates. The coding error, which may have been present in the code since 2005, causes critical verification checks to be terminated, drawing ironic parallels to the extremely critical “goto fail” flaw that for months put users of Apple’s iOS and OS X operating systems at risk of surreptitious eavesdropping attacks. Apple developers have since patched the bug.
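The class of bug is easy to reproduce in miniature. The sketch below is hypothetical Python, not the actual GnuTLS C source: a verification routine mixes negative error codes into an otherwise boolean return value, and a caller that tests only truthiness accepts a certificate whenever an internal error occurs:

```python
# Hypothetical sketch of the bug class (not the actual GnuTLS code).
# The checker returns 1 for "valid", 0 for "invalid", and a negative
# number for an internal error -- three-valued, not boolean.
def check_certificate(internal_error: bool) -> int:
    if internal_error:
        return -1  # an error code, not a validity verdict
    return 0       # certificate failed validation

def buggy_caller(internal_error: bool) -> bool:
    # The bug: testing only truthiness means the negative error code
    # is treated the same as 1, i.e. "certificate accepted".
    return bool(check_certificate(internal_error))
```

An internal error thus makes the broken caller accept a certificate that was never validated.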
Having read the code, I can’t help but feel it has the same level of plausible deniability when it comes to the question “is this on purpose?” And that worries me.
Britain’s surveillance agency GCHQ, with aid from the US National Security Agency, intercepted and stored the webcam images of millions of internet users not suspected of wrongdoing, secret documents reveal.
GCHQ files dating between 2008 and 2010 explicitly state that a surveillance program codenamed Optic Nerve collected still images of Yahoo webcam chats in bulk and saved them to agency databases, regardless of whether individual users were an intelligence target or not.
In one six-month period in 2008 alone, the agency collected webcam imagery – including substantial quantities of sexually explicit communications – from more than 1.8 million Yahoo user accounts globally.
The document estimates that between 3% and 11% of the Yahoo webcam imagery harvested by GCHQ contains “undesirable nudity”. Discussing efforts to make the interface “safer to use”, it noted that current “naïve” pornography detectors assessed the amount of flesh in any given shot, and so attracted lots of false positives by incorrectly tagging shots of people’s faces as pornography.
Jeffrey Grossman, on Twitter:
I have confirmed that the SSL vulnerability was introduced in iOS 6.0. It is not present in 5.1.1 and is in 6.0.
According to slide 6 in the leaked PowerPoint deck on NSA’s PRISM program, Apple was “added” in October 2012.
These three facts prove nothing; it’s purely circumstantial. But the shoe fits.
No Swiss fighter jets were scrambled Monday when an Ethiopian Airlines co-pilot hijacked his own plane and forced it to land in Geneva, because it happened outside business hours, the Swiss airforce said.
“Working for the TSA,” I wrote, “has the feel of riding atop the back of a large, dopey dog fanatically chasing its tail clockwise for a while, then counterclockwise, and back again, ad infinitum.”
The National Security Agency has told Sen. Bernie Sanders (I-Vt.) that it cannot answer his question about whether it collects information on members of Congress because doing so would violate the law.
In a letter to Sanders, which was obtained by The Huffington Post, Gen. Keith Alexander, who heads the agency, insisted that nothing the NSA “does can fairly be characterized as ‘spying on Members of Congress or American elected officials.’” But Alexander wouldn’t go more in depth than that, arguing that he would be violating the civilian protections of the program if he did.
“Among those protections is the condition that NSA can query the metadata only based on phone numbers reasonably suspected to be associated with specific foreign terrorist groups,” Alexander wrote. “For that reason, NSA cannot lawfully search to determine if any records NSA has received under the program have included metadata of the phone calls of any member of Congress, other American elected officials, or any other American without the predicate.”
As a key part of a campaign to embed encryption software that it could crack into widely used computer products, the U.S. National Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the computer security industry, Reuters has learned.
Documents leaked by former NSA contractor Edward Snowden show that the NSA created and promulgated a flawed formula for generating random numbers to create a “back door” in encryption products, the New York Times reported in September. Reuters later reported that RSA became the most important distributor of that formula by rolling it into a software tool called BSafe that is used to enhance security in personal computers and many other products.
Undisclosed until now was that RSA received $10 million in a deal that set the NSA formula as the preferred, or default, method for number generation in the BSafe software, according to two sources familiar with the contract. Although that sum might seem paltry, it represented more than a third of the revenue that the relevant division at RSA had taken in during the entire previous year, securities filings show.
The National Security Agency has made repeated attempts to develop attacks against people using Tor, a popular tool designed to protect online anonymity, despite the fact the software is primarily funded and promoted by the US government itself.
Top-secret NSA documents, disclosed by whistleblower Edward Snowden, reveal that the agency’s current successes against Tor rely on identifying users and then attacking vulnerable software on their computers. One technique developed by the agency targeted the Firefox web browser used with Tor, giving the agency full control over targets’ computers, including access to files, all keystrokes and all online activity.
But the documents suggest that the fundamental security of the Tor service remains intact. One top-secret presentation, titled ‘Tor Stinks’, states: “We will never be able to de-anonymize all Tor users all the time.” It continues: “With manual analysis we can de-anonymize a very small fraction of Tor users,” and says the agency has had “no success de-anonymizing a user in response” to a specific request.
Roger Dingledine, the president of the Tor project, said the NSA’s efforts serve as a reminder that using Tor on its own is not sufficient to guarantee anonymity against intelligence agencies – but said it was also a great aid in combating mass surveillance.
“The good news is that they went for a browser exploit, meaning there’s no indication they can break the Tor protocol or do traffic analysis on the Tor network,” Dingledine said. “Infecting the laptop, phone, or desktop is still the easiest way to learn about the human behind the keyboard.”
Adobe said Thursday that it recently suffered a massive security breach which compromised the IDs, passwords, and credit card information of nearly three million customers.
“Our investigation currently indicates that the attackers accessed Adobe customer IDs and encrypted passwords on our systems. We also believe the attackers removed from our systems certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders,” Brad Arkin, Adobe’s chief security officer, wrote in a security alert.
They probably had an older version of Flash installed….
FBI agents put this pressure on ACLU clients Abe Mashal, a Marine veteran; Amir Meshal; and Nagib Ali Ghaleb. Each of these Americans spoke to FBI agents to learn why they were suddenly banned from flying and to clear up the errors that led to that decision. Instead of providing that explanation or opportunity, FBI agents offered to help them get off the No-Fly List—but only in exchange for serving as informants in their communities. Our clients refused.
The ACLU’s report, Unleashed and Unaccountable: The FBI’s Unchecked Abuse of Authority, explains what happened to Nagib Ali Ghaleb. Nagib was denied boarding when trying to fly home to San Francisco after a trip to visit family in Yemen. Stranded abroad and desperate to return home, Nagib sought help from the U.S. embassy in Yemen and was asked to submit to an FBI interview. FBI agents offered to arrange for Nagib to fly back immediately to the United States if he would agree to tell the agents who the “bad guys” were in Yemen and San Francisco. The agents insisted that Nagib could provide the names of people from his mosque and the San Francisco Yemeni community. The agents said they would have Nagib arrested and jailed in Yemen if he did not cooperate, and that Nagib should “think about it.” Nagib, however, did not know any “bad guys” and therefore refused to spy on innocent people in exchange for a flight home.
Nagib’s experience is far from unique. After Abe Mashal was denied boarding at Chicago’s Midway Airport, FBI agents questioned him about his religious beliefs and practices. The agents told Abe that if he would serve as an informant for the FBI, his name would be removed from the No-Fly List and he would receive compensation. When Abe refused, the FBI promptly ended the meeting.
Neither Nagib nor Abe present a threat to aviation security. But FBI agents sought to exploit their fear, desperation, and confusion when they were most vulnerable, and to coerce them into working as informants. Moreover, the very fact that FBI agents asked Nagib and Abe to spy on people for the government is yet another indication that the FBI doesn’t actually think either man is a suspected terrorist. This abusive use of a government watch list underscores the serious need for regulation, oversight, and public accountability of an FBI that has become unleashed and unaccountable.
Documents from the archive of whistleblower Edward Snowden indicate that Britain’s GCHQ intelligence service was behind a cyber attack against Belgacom, a partly state-owned Belgian telecoms company. A “top secret” Government Communications Headquarters (GCHQ) presentation seen by SPIEGEL indicates that the goal of the project, conducted under the codename “Operation Socialist,” was “to enable better exploitation of Belgacom” and to improve understanding of the provider’s infrastructure.
The presentation is undated, but another document indicates that access has been possible since 2010. The document shows that the Belgacom subsidiary Bics, a joint venture between Swisscom and South Africa’s MTN, was on the radar of the British spies.
Belgacom, whose major customers include institutions like the European Commission, the European Council and the European Parliament, ordered an internal investigation following the recent revelations about spying by the United States’ National Security Agency (NSA) and determined it had been the subject of an attack. The company then referred the incident to Belgian prosecutors. Last week, Belgian Prime Minister Elio di Rupo spoke of a “violation of the public firm’s integrity.”
When news first emerged of the cyber attack, suspicions in Belgium were initially directed at the NSA. But the presentation suggests that it was Belgium’s own European Union partner Britain that was behind “Operation Socialist,” even though the presentation indicates that the British used spying technology for the operation that the NSA had developed.
Yes, this is a conspiracy theory. But I’m not willing to discount such things anymore. That’s the worst thing about the NSA’s actions. We have no idea whom we can trust.
The federal agency charged with recommending cybersecurity standards said Tuesday that it would reopen the public vetting process for an encryption standard, after reports that the National Security Agency had written the standard and could break it.
“We want to assure the I.T. cybersecurity community that the transparent, public process used to rigorously vet our standards is still in place,” the National Institute of Standards and Technology said in a public statement. “N.I.S.T. would not deliberately weaken a cryptographic standard.”
The announcement followed reports published by The New York Times, The Guardian and ProPublica last Thursday about the N.S.A.’s success in foiling much of the encryption that protects vast amounts of information on the Web. The Times reported that as part of its efforts, the N.S.A. had inserted a back door into a 2006 standard adopted by N.I.S.T. and later by the International Organization for Standardization, which counts 163 countries as members.
For encryption to be secure, the system must generate secret prime numbers randomly. That random number generation process — which is based on mathematical algorithms — makes it practically impossible for an attacker, or intelligence agency, to predict the scrambling protocols that would allow it to unscramble an encrypted message.
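The distinction drawn here, between numbers an observer can predict and numbers they cannot, can be seen in miniature with Python’s `random` (a deterministic PRNG) versus `secrets` (the OS CSPRNG); the seed value below is arbitrary:

```python
# A deterministic PRNG is fully reproducible by anyone who knows its
# seed or internal state -- exactly the property an intelligence
# agency would want in a "random" number generator it helped design.
import random
import secrets

random.seed(42)
predictable = random.getrandbits(128)
random.seed(42)
assert random.getrandbits(128) == predictable  # same seed, same "randomness"

# Key material must instead come from a CSPRNG:
unpredictable = secrets.randbits(128)
```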
But internal memos leaked by a former N.S.A. contractor, Edward Snowden, suggest that the N.S.A. generated one of the random number generators used in a 2006 N.I.S.T. standard — called the Dual EC DRBG standard — which contains a back door for the N.S.A. In publishing the standard, N.I.S.T. acknowledged “contributions” from N.S.A., but not primary authorship.
In security, the worst case—the thing you most want to avoid—is thinking you are secure when you’re not. And that’s exactly what the NSA seems to be trying to perpetuate.
Suppose you’re driving a car that has no brakes. If you know you have no brakes, then you can drive very slowly, or just get out and walk. What is deadly is thinking you have the ability to stop, until you stomp on the brake pedal and nothing happens. It’s the same way with security: if you know your communications aren’t secure, you can be careful about what you say; but if you think mistakenly that you’re safe, you’re sure to get in trouble.
So the problem is not (only) that we’re unsafe. It’s that “the N.S.A. wants to keep it that way.” The NSA wants to make sure we remain vulnerable.
Of course, we “have been assured by Internet companies” that we are safe. It’s always wise to be wary of vendors’ security assurances—there’s a lot of snake oil out there—but this news calls for a different variety of skepticism that doubts the assurances of even the most earnest and competent companies. This is going to put U.S. companies at a competitive disadvantage, because people will believe that U.S. companies lack the ability to protect their customers—and people will suspect that U.S. companies may feel compelled to lie to their customers about security.
The worst news of all, in my view, is that the NSA has taken active steps to undermine public encryption standards.
The U.S. government spied on Brazil’s state-controlled oil company, Petroleo Brasileiro SA, Globo TV reported, citing classified documents obtained by former intelligence contractor Edward Snowden.
The television network, which reported a week ago that the U.S. National Security Agency intercepted phone calls and e-mails of Brazilian President Dilma Rousseff, aired slides from an NSA presentation from 2012 that explained the agency’s capability to penetrate private networks of companies such as Petrobras, as the oil company is known, and Google Inc.
One slide in the presentation listed “economic” as an intention for spying, as well as diplomatic and political reasons. None of the documents revealed the motivation for the alleged spying on Petrobras, according to Globo.
The presentation appears to contradict a statement made by an NSA spokesman to the Washington Post in an August 30 article, in which the agency said that the U.S. Department of Defense “does not engage in economic espionage in any domain, including cyber.”
Petrobras declined to comment in an e-mailed response to questions. An official at the NSA told Globo that the agency gathers economic information in order to monitor for signs of potential instability in financial markets, and not to steal commercial secrets, according to tonight’s program.
Apparently Petrobras is a hotbed of financial instability. They’re probably the single cause behind the 2008 meltdown of the financial markets.
On the Cryptography mailing list, John Gilmore (co-founder of pioneering ISP The Little Garden and the Electronic Frontier Foundation; early Sun employee; cypherpunk; significant contributor to GNU/Linux and its crypto suite; and all-round Internet superhero) describes his interactions with the NSA and several obvious NSA stooges on the IPSEC standardization working groups at the Internet Engineering Task Force. It’s an anatomy of how the NSA worked to undermine and sabotage important security standards. For example, “NSA employees explicitly lied to standards committees, such as that for cellphone encryption, telling them that if they merely debated an actually-secure protocol, they would be violating the export control laws unless they excluded all foreigners from the room (in an international standards committee!).”
The files show that the National Security Agency and its UK counterpart GCHQ have broadly compromised the guarantees that internet companies have given consumers to reassure them that their communications, online banking and medical records would be indecipherable to criminals or governments.
The agencies, the documents reveal, have adopted a battery of methods in their systematic and ongoing assault on what they see as one of the biggest threats to their ability to access huge swathes of internet traffic – “the use of ubiquitous encryption across the internet”.
Those methods include covert measures to ensure NSA control over setting of international encryption standards, the use of supercomputers to break encryption with “brute force”, and – the most closely guarded secret of all – collaboration with technology companies and internet service providers themselves.
Through these covert partnerships, the agencies have inserted secret vulnerabilities – known as backdoors or trapdoors – into commercial encryption software.
The TSA is allowed to lie in its responses to Freedom of Information Requests. Its court-granted ability to lie to the public it nominally serves isn’t limited to sensitive issues, either: they’re allowed to pretend that they don’t have CCTV footage of their own officers violating their own policies, even when they do.