Monthly Archives: July 2012

Distributed Denial of Service ‘DDoS’ blackmailers busted in cross-border swoop

Cyber hoodlums targeted gold and silver traders

Chinese and Hong Kong cops are hailing another success in their cross-border cyber policing efforts with the scalp of a high-profile DDoS blackmail gang which targeted gold, silver and securities traders in the former British colony.

Six cyber hoodlums were arrested on the mainland in Hunan, Hubei, Shanghai and other locations at the end of June, according to a report in local Hong Kong rag The Standard. Some 16 Hong Kong-based firms, including the Chinese Gold & Silver Exchange, were targeted in the scheme, which was designed to blackmail them to the tune of 460,000 yuan (£46,200). The gang apparently threatened to cripple their victims’ web operations with distributed denial of service (DDoS) attacks if they didn’t cough up.

Four of the targeted firms transferred funds totalling 290,000 yuan (£29,150) into designated bank accounts in mainland China, the report said. A source also told The Standard that some of the victims may have been involved in shady dealings themselves, which made them more reluctant to seek police help.

Roy Ko, centre manager of the Hong Kong Computer Emergency Response Team (HKCERT), told The Reg that the arrests are an indication of improving cross-border cyber policing efforts.

“Working with counterparts cross-border is always a challenge because of different practices, languages, different time zones and so on. Usually, HK and the mainland maintain a good working relationship, just like the HKCERT and CNCERT,” he said. “Because we are in the same time zone, the response is usually quicker than working with the US, for example, where we have to wait until the next day to get a response.”

Ko also warned that the attacks show this form of cyber threat is still a popular one for avaricious criminal gangs.

“Firms have to assess whether they are a probable target of such an attack – ie whether they rely heavily on the internet to do business – and then prepare countermeasures,” he added.
“Subscribing to an anti-DDoS service may be part of the protection strategy in addition to anti-malware, firewall, etc.”

Hong Kong businesses have been warned before that they’re fair game to hackers from neighbouring China.

Source: http://www.theregister.co.uk/2012/07/04/hong_kong_china_bust_ddos_gang_blackmail/


Legal blog site suffered Distributed Denial of Service ‘DDoS’ attack

When a blog that typically attracts 30,000 visitors a day is hit with 5.35 million, its operators had better be prepared for what seems way too big to be called a spike. The popular SCOTUSblog, which provides news and information about the United States Supreme Court, was put to this test last week after the historic healthcare ruling, and it passed with flying colors thanks to months of planning and a willingness to spend $25,000.

“We knew we needed to do whatever it took to make sure we were capable of handling what we knew would be the biggest day in this blog’s history,” says Max Mallory, deputy manager of the blog, who coordinates its IT.

The massive traffic spike was something of a perfect storm for SCOTUSblog, which Supreme Court litigator Tom Goldstein of the Washington, D.C., boutique Goldstein & Russell founded in 2002. Not only is the site a respected source of Supreme Court news and information, but in the days leading up to the ruling, buzz about the blog itself began picking up. President Barack Obama’s press secretary named SCOTUSblog as one source White House officials would monitor to hear news from the court. When the news broke, two of the first media organizations to report it — Fox News and CNN — got the ruling wrong. Many media outlets cited SCOTUSblog as the first to correctly report that the Supreme Court upheld the Affordable Care Act in a 5-4 decision.

But even before “decision day,” as Mallory calls it, the small team at SCOTUSblog knew Thursday would put a lot of strain on the blog’s IT infrastructure. The first indications came during the healthcare arguments at the Supreme Court in March, when SCOTUSblog received almost 1 million page views over the three days of deliberations. The blog’s single server at Web hosting company Media Temple just couldn’t handle the traffic.
“That was enough to crash our site at various points throughout those days, and it just generally kept us slow for a majority of the time the arguments were going on,” Mallory says.

In the weeks leading up to the decision, Mallory worked with a hired team of developers to optimize the website’s code, install the latest plugins and generally tune up the site. Mallory realized that wouldn’t be enough, though. No one knew for sure when the high court would release the most anticipated Supreme Court decision in years, but each day it didn’t happen, there was a greater chance it would come down the next day. Traffic steadily climbed leading up to the big day: the week before the ruling the site saw 70,000 visitors; days before the decision, it got 100,000.

“It became clear we weren’t going to be able to handle the traffic we were expecting to see when the decision was issued,” Mallory says.

A week before the decision, Mallory reached out to Sound Strategies, a website optimization company that works specifically with WordPress. The Sound Strategies team worked through the weekend recoding the SCOTUSblog site again, installing high-end caching plugins, checking for script conflicts and cleaning out old databases left behind by plugins that had been removed. The team also installed Nginx, the open source Web server, to run on the Media Temple hardware.

All of the improvements helped, but when the decision did not come on Tuesday, June 26, it became clear that Thursday, June 28, the last day of the court’s term, would be decision day. Mallory was getting worried: earlier in the week SCOTUSblog had suffered a distributed denial-of-service (DDoS) attack targeting the website. That couldn’t happen on Thursday, when the court would issue the ruling.

“This was our time, it just had to work,” Mallory says. The night before decision day, Mallory and Sound Strategies took drastic measures.
Mallory estimated the site could see between 200,000 and 500,000 hits the next day, so the group decided to purchase four additional servers from Media Temple, which Sound Strategies configured overnight. SCOTUSblog ended up with a setup on Thursday morning in which a main server acted as the centralized host of SCOTUSblog, with four satellite servers serving cached images of the website that were refreshed every six minutes. A live blog providing real-time updates — which was the first to correctly report the news — was hosted by CoveritLive, a live-blogging service.

As 10 a.m. EDT approached, the system was put to the test. At 10:03, the site was handling 1,000 requests per second. By 10:04 it had reached 800,000 total page views. That number climbed to 1 million by 10:10, and by 10:30 the site had received 2.4 million hits. Because of the satellite caching, Mallory says, the site was loading faster during peak traffic than it ever had before.

In post-mortem reviews, Sound Strategies engineers said they found evidence of two DDoS attacks, one at 9:45 a.m. and another at 10 a.m., which the servers were able to absorb.

“We built this fortress that was used basically for two hours that morning,” Mallory says. “It worked and it never slowed down.”

Since the healthcare decision, SCOTUSblog has seen higher-than-normal traffic, but nowhere near the 5 million page views the site amassed on the biggest day in the blog’s history.

“It was a roller coaster,” Mallory says. “You can have the best analysis, the fastest, most accurate reporting, but if your website crashes and no one can see it at that moment, it doesn’t matter.”

Source: http://www.arnnet.com.au/article/429473/how_legal_blog_survived_traffic_tidal_wave_after_court_healthcare_ruling/?fp=4&fpid=1090891289
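The architecture described here, satellite servers answering from a short-lived cache that is refreshed every few minutes while a single origin stays authoritative, maps closely onto Nginx's proxy cache. A minimal sketch of that idea (the server names, cache path, and timings are illustrative placeholders, not SCOTUSblog's actual configuration):

```nginx
# Satellite server: serve cached copies of the origin, refreshed periodically.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=satellite:10m
                 max_size=1g inactive=10m use_temp_path=off;

server {
    listen 80;
    server_name blog.example.com;               # placeholder domain

    location / {
        proxy_pass http://origin.example.com;   # the main (central) server
        proxy_cache satellite;
        proxy_cache_valid 200 6m;               # keep pages for six minutes
        # Keep serving stale content if the origin is slow or down, so a
        # traffic surge or DDoS at the origin doesn't take the site offline.
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

With this kind of setup, only one request per refresh interval reaches the origin; the satellites absorb everything else, which is why cached pages can load faster under peak load than under normal conditions.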


Distributed Denial of Service ‘DDoS’ mitigation a key component in network security

Attacker motivations behind distributed denial-of-service (DDoS) attacks have shifted away from the solely financial (for example, the extortion of online gambling sites and retailers) toward socially and politically motivated campaigns against government websites, media outlets and even small businesses. Hacktivist collectives such as Anonymous, LulzSec and others have used DDoS attacks to damage a target’s reputation or revenue since December 2010, when Anonymous began targeting corporate websites that opposed Wikileaks.

At that time, attacks were conducted using botnets to flood sites’ servers with large quantities of TCP or UDP packets, effectively shutting down the sites for hours at a time. Today, botmasters have begun to use more complex strategies that focus on specific areas of the network, such as email servers or Web applications. Others divert security teams’ attention with DDoS flood attacks while live hackers pursue the actual objective: valuable corporate or personal information. This tactic was used in the infamous attack against Sony in 2011, according to Carlos Morales, vice president of global sales engineering and operations at Chelmsford, Mass.-based DDoS mitigation vendor Arbor Networks Inc.

Rapid growth in the sophistication of DDoS attacks, combined with the prevalence of attacks across markets, makes for a dangerous and fluid attack landscape. Security researchers and providers agree that it’s becoming more important for companies to protect themselves from denial-of-service attacks, in addition to implementing other network security measures. DDoS attacks can quickly cripple a company financially: a recent survey from managed DNS provider Neustar, for example, found that outages could cost a company up to $10,000 per hour.
Neustar’s survey, “DDoS Survey Q1 2012: When Businesses Go Dark,” reported that 75% of respondents (North American telecommunication, travel, finance, IT and retail companies that had undergone a DDoS attack) used firewalls, routers, switches or an intrusion detection system to combat DDoS attacks. Neustar’s researchers say such equipment is more often part of the problem than the solution. “They quickly become bottlenecks, helping achieve an attacker’s goal of slowing or shutting you down,” the report stated. “Moreover, firewalls won’t repel attacks on the application layer, an increasingly popular DDoS vector.”

For those reasons, experts suggest that companies with the financial and human resources incorporate DDoS-specific mitigation technology or services into their security strategy. Service providers such as Arbor Networks, Prolexic and others monitor traffic for signs of attacks and can choke them off before downtime, floods of customer support calls, and damage to brand or reputation occur.

Purchasing DDoS mitigation hardware instead requires hiring and training employees with expertise in the area, and experts say that can be even more expensive. “In general, it’s very hard to justify doing self-mitigation,” said Ted Swearingen, director of the Neustar security operations center. All the additional steps a company has to take to run its own DDoS mitigation tool, such as widening bandwidth, adding firewalls, working with ISPs, adding security monitoring and hiring experts to run it all, make it a cost-ineffective strategy in the long term, he said. Only three percent of the companies in Neustar’s survey reported using that type of protection.

In some cases, smaller DDoS mitigation providers even turn to larger vendors for support when they face an attack too large, too complex or too new to handle on their own. Secure hosting provider VirtualRoad.org is an example.
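The report's point that firewalls won't repel application-layer floods is why mitigation has to happen at the request level, where a flooding client can be distinguished from a legitimate one. A toy sketch of that idea, a per-client token bucket (all names and rates here are illustrative, not any vendor's product or API):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client IP: a flooding source drains only its own bucket,
# so legitimate clients are unaffected.
buckets = defaultdict(lambda: TokenBucket(rate=5, capacity=10))

def handle_request(client_ip: str) -> int:
    """Return an HTTP-style status: 200 served, 429 Too Many Requests."""
    return 200 if buckets[client_ip].allow() else 429
```

Real mitigation services layer much more on top of this (reputation data, challenge pages, anomaly detection), but request-level accounting of this kind is the part a packet-filtering firewall cannot do.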
The company provides protection from DDoS attacks for independent media outlets in countries facing political and social upheaval—places where censorship by the government or other sources is rampant, such as Iran, Burma and Zimbabwe. A specific niche like that, in a narrow market with small clients, doesn’t usually require extra support, but VirtualRoad.org has used its partnership with Prolexic a few times in the last year, according to CTO Tord Lundström.

VirtualRoad.org has its own infrastructure to deal with attacks, Lundström said, but there are limits to the volume and complexity it can handle. When an attack gets to be too much, the company routes the traffic to Prolexic, a security firm that charges a flat fee regardless of how many times a customer is attacked. “It’s easy to say, ‘We’ll do it when an attack comes,’ and then when an attack comes they say, ‘Well, you have to pay us more or we won’t protect you,’” Lundström said of other services. Extra fees like that are often the reason why those who need quality DDoS protection, especially small businesses like VirtualRoad.org’s clients, can’t afford it, he said.

The impact can be worse for companies if the DDoS attack is being used as a diversion. According to a recent survey by Arbor Networks, 27% of respondents had been the victims of multi-vector attacks. The “Arbor Special Report: Worldwide Infrastructure Security Report,” which polled 114 self-classified Tier 1, Tier 2 and other IP network operators from the U.S. and Canada, Latin/South America, EMEA, Africa and Asia, stated that not only is the complexity of attacks growing, but their size as well. In 2008, the largest observed attack was about 40 Gbps; last year, after an unusual spike to 100 Gbps in 2010, the largest recorded attack was 60 Gbps. This denotes a steady increase in the size of attacks, but Morales of Arbor Networks believes the numbers will eventually plateau, because most networks can be brought down with far smaller attacks, around 10 Gbps.
Even if they stop growing, however, DDoS attacks won’t stop happening altogether, Morales said. Not even the change to IPv6 will stop the barrage of daily attacks; the report already recorded some attacks over IPv6. Because this attack strategy is here to stay, experts suggest that all companies that do business online prepare for it by doing away with the “it won’t happen to me” attitude.

Recent “hacktivist” activities have at least given DDoS attacks enough press that CSOs and CEOs are starting to pay attention, but that’s just the first step, Morales said. It’s important to follow through and get the protection your business needs, said VirtualRoad.org’s Lundström. “The goal is to keep doing the work,” he said.

Source: http://searchsecurity.techtarget.com/news/2240159017/DDoS-mitigation-a-key-component-in-network-security


Banking Outage Prevention Tips

A series of fresh technology shutdowns this spring at banks around the world reveals that the financial services industry still has a long way to go toward ensuring full uptime for networks, as well as communicating with the public about why tech glitches have happened and what is being done about them.

In May, Santander, Barclays and HSBC were all hit by digital banking outages. Some customers of Barclays and Santander were unable to access accounts online for a time near the end of the month, an outage blamed largely on end-of-the-month transaction volume. At HSBC, an IT hardware failure temporarily rendered ATMs unable to dispense cash or accept card payments in the U.K.

Barclays and Santander both apologized for the outages through statements, while HSBC’s approach revealed both the power and peril of social media in such cases. HSBC’s PR office took to social media to communicate updates on the outage, and to receive criticism about it (HSBC, Santander and Barclays did not return queries for comment). After an earlier outage in November, HSBC had set up a social monitoring team to be more proactive about communicating with the public about tech glitches, a move that seemed to have some positive impact: not all of the Twitter and Facebook postings about the most recent outage were complaints.

The basic task of making sure the rails are working, and smoothing things over with customers when systems invariably shut down, is an even more pressing matter considering the propensity for outrage to spread quickly among the public via new channels. “One thing that’s true about outages is we’re hearing more about them. The prevalence of social media use by irate customers and even employees makes these outages more publicized,” says Jacob Jegher, a senior analyst at Celent. Jegher says using social media for outage communication is tough: balancing the need to communicate with customers against internal tech propriety is easier said than done.
“While it’s certainly not the institution’s job, nor should it be their job, to go into every technical detail, it’s helpful to provide some sort of consistent messaging with updates, so customers know that the bank is listening to them,” Jegher says.

National Australia Bank, which suffered a series of periodic online outages about a year ago that left millions of people unable to access paychecks, responded with new due diligence and communications programs. In an email response to BTN, National Australia Bank Chief Information Officer Adam Bennett said the bank has since reduced incident numbers by as much as 40 percent through a project aimed at improving testing. He said that if an incident does occur, the bank communicates via social media channels, with regular updates and individual responses to consumers where possible. The bank also issued an additional statement to BTN, saying “while the transaction and data demands on systems have grown exponentially in recent years led by online and mobile banking, the rate of incidents has steadily declined due to a culture of continuous improvement…The team tests and uses a range of business continuity plans. While we don’t disclose the specifics, whenever possible we will invoke these plans to allow the customer experience to continue uninterrupted.”

While communicating information about outages is good, it’s obviously better to prevent them in the first place. Coastal Bank & Trust, a $66 million-asset community bank based in Wilmington, N.C., has outsourced its monitoring and recovery, using disaster recovery support from Safe Systems, a business continuity firm, to vet for outage threats, supply backup server support in the event of an outage, and contribute to the bank’s preparation for and response to mandatory yearly penetration and vulnerability tests.
“Safe Systems makes sure that the IP addresses are accessible and helps with those scans,” says Renee Rhodes, chief compliance and operations officer for Coastal Bank & Trust. The bank has also outsourced security monitoring to Gladiator, a Jack Henry enterprise security monitoring product that scours the bank’s IT network to flag activity that could indicate a potential outage or external attack. The security updates include weekly virus scans and patches.

Coastal Bank & Trust’s size – it has only 13 employees – makes digital banking a must for competitive reasons, which increases both the threat of downtime and the burden of maintaining access. “We do mobile, remote deposit capture, all of the products that the largest banks have. I am a network administrator, and one of my co-workers is a security officer. With that being said, none of us has an IT background,” Rhodes says. “I don’t know if I could put a number on how important it is to have these systems up and running.”

Much of the effort toward managing downtime risk goes into identifying and thwarting external threats that could render systems inoperable for a period of time. Troy Bradley, chief technology officer at FIS, says the tech firm has noticed an increase in external denial-of-service attacks recently, which is putting the entire banking and financial services technology industry on alert for outages and tech issues with online banking and other platforms. “You’ll see a lot of service providers spending time on this. It’s not the only continuity requirement to solve, but it’s one of the larger ones,” he says.

To mitigate downtime risk for its hosted solutions, FIS uses virtualization to backstop the servers that run financial applications, such as web banking or mobile banking. That creates a “copy” of each server for redundancy purposes, and that copy can be moved to another data center if necessary.
“We can host the URL (that runs the web-enabled service on behalf of the bank) at any data center…if we need to move the service or host it across multiple data centers we can do that…we think we have enough bandwidth across these data centers to [deal with] any kind of denial of service attack that a crook can come up with,” Bradley says.

FIS also uses third-party software to monitor activity at its data centers in Brown Deer, Wis.; Little Rock; and Phoenix, searching for patterns that can flag a denial-of-service attack early and allow traffic connected to its clients to be routed to one of the other two data centers. For licensed solutions, FIS sells added middleware that performs a similar function, creating a redundant copy of a financial service that can be stored and accessed in an emergency.

Stephanie Balaouras, a vice president and research director for security and risk at Forrester Research, says virtualization is a good way to mitigate both performance issues, such as systems being overwhelmed by the volume of customer transactions, and operational issues such as hardware failure, software failure or human error.

“If it’s [performance], the bank needs to revisit its bandwidth and performance capacity. With technologies like server virtualization, it shouldn’t be all that difficult for a large bank to bring additional capacity online in advance of peak periods or specific sales and marketing campaigns that would increase traffic to the site. The same technology would also allow the bank to load-balance performance across all of its servers – non-disruptively. The technology is never really the main challenge; it tends to be the level of maturity and sophistication of the IT processes for capacity planning, performance management, incident management, automation, etc.,” she says.
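The pattern described here, monitoring each data center and re-routing client traffic when one degrades, reduces to a health-checked failover selection. A minimal sketch of the control logic (the data-center names are taken from the article, but the `is_healthy` probe and the code itself are hypothetical stand-ins, not FIS's actual tooling):

```python
from typing import Callable, Optional

# Sites ordered by preference; the first healthy one receives the traffic.
DATA_CENTERS = ["brown-deer", "little-rock", "phoenix"]

def pick_data_center(is_healthy: Callable[[str], bool],
                     centers: list = DATA_CENTERS) -> Optional[str]:
    """Return the first data center whose health probe passes.

    In a real deployment the probe would watch request latency and error
    rates, the patterns that let a denial-of-service flood be spotted early.
    """
    for dc in centers:
        if is_healthy(dc):
            return dc
    return None  # every site down: escalate to disaster-recovery failover

# Example: the primary site is absorbing an attack, so traffic shifts
# to the next healthy data center in the preference order.
status = {"brown-deer": False, "little-rock": True, "phoenix": True}
```

The same selector works for the licensed-middleware case in the article: the redundant copy of a service is just another entry in the preference list.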
In the case of operational issues, server virtualization is still a great technology, Balaouras says, because it allows the bank to restart failed workloads within minutes on alternate physical servers in the environment, or even in another data center. “You can also configure virtual servers in high-availability or fault-tolerant pairs across physical servers so that one hardware failure cannot take down a mission-critical application or service,” Balaouras says.

More significant operational failures, such as a storage area network (SAN) failure, pose a greater challenge to network continuity and backup efforts, Balaouras says. “In this case, you would need to recover from a backup. But more than likely a bank should treat this as a ‘disaster’ and fail over operations to another data center where there is redundant IT infrastructure,” she says.

Source: http://www.americanbanker.com/btn/25_7/online-banking-outage-prevention-strategies-1050405-1.html
