Jupiter Broadcasting

Future SSL | TechSNAP 37

Find out what major infrastructure software uses the admin password of “100”, plus future improvements to SSL, how the CIA keeps their IT guys trustworthy, and…

An epic tech war story!!

All that and more, on this week’s TechSNAP.

Thanks to:

GoDaddy.com Use our codes TechSNAP10 to save 10% at checkout, or TechSNAP20 to save 20% on hosting!

Free Private Registration

GoDaddy Offer Code: techsnap17
Link: https://www.godaddy.com/domainaddon/private-registration.aspx?isc=techsnap17

$1.99 hosting for the first 3 months

GoDaddy Offer Code: techsnap11

20% off .xxx domains

Code: techsnapx

Direct Download Links:

HD Video | Large Video | Mobile Video | MP3 Audio | OGG Audio | YouTube

   
Subscribe via RSS and iTunes:

Show Notes:

Siemens lied about critical flaws in SCADA software

  • The SIMATIC systems have a major flaw in the authentication system that allows an attacker to entirely bypass authentication, accessing the control software without a username or password
  • If a user changes the password to something with a special character in it, the system may automatically reset the password to ‘100’
  • The Siemens system was the target of the Stuxnet attack, the most sophisticated virus/worm ever seen, yet it remains rather trivial to break into
  • The values of the session cookies used by the Siemens system can be predicted after some analysis, allowing an attacker to authenticate without any credentials (see the session-token sketch after this list)
  • The researcher (Billy Rios, who works for Google) discovered this issue in May and reported it to Siemens, which acknowledged the problem at the time
  • Later, Siemens’ PR department told a Reuters reporter that “there are no open issues regarding authentication bypass bugs at Siemens”
  • The SIMATIC system has 3 interfaces: Web, VNC and Telnet (why? Telnet is insecure). All three interfaces use separate credentials, all defaulting to ‘100’. If a user changes the web password, they may not realize that the VNC password is still the default
  • The SCADA system at a water and sewage treatment plant in Texas was compromised by an attacker who found the system to be using a 3 character password (possibly the ‘100’ described above)
  • Additional In-Depth Coverage
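
To make the session-cookie weakness above concrete, here is a minimal, hypothetical Python sketch (not Siemens’ actual code) contrasting a predictable session token with one drawn from the operating system’s CSPRNG. An attacker who observes a few predictable tokens can infer the pattern and forge a valid cookie without ever logging in; random tokens offer no such pattern.

    # Hypothetical illustration only; not the SIMATIC implementation.
    import secrets
    import time

    def weak_token(counter: int) -> str:
        # Predictable: derived from the clock plus a simple counter.
        # After seeing a few values, an attacker can guess the next one
        # and present it as a valid session cookie.
        return format(int(time.time()) + counter, "08x")

    def strong_token() -> str:
        # Unpredictable: 32 random bytes from the OS CSPRNG.
        return secrets.token_hex(32)

    print("weak:  ", weak_token(1))
    print("strong:", strong_token())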

Shorter warranties on desktop hard drives

  • Western Digital and Seagate have announced that drives sold in the new year may have significantly shorter warranties
  • Most desktop hard drives will see their warranties cut; higher-end and near-line drives may also see reductions
  • Western Digital drives (Green/Blue editions and others), except the Black editions, will drop from 3 years to 2. Black Edition, VelociRaptor and Enterprise products will continue to have 5 year warranties.
  • Seagate desktop and laptop drives (Barracuda, Barracuda Green, Momentus 2.5”) will see their industry-leading 5 year warranties cut to only 1 year
  • Seagate’s specialty Video and Surveillance drives (SV35 Series, Pipeline HD/HD Mini) will feature 2 year warranties
  • Seagate’s higher end drives (Barracuda XT, and the hybrid Momentus XT) as well as near line drives (Constellation 2/ES/ES2) will come with 3 year warranties
  • Seagate enterprise drives, such as the Cheetah series, will retain their 5 year warranty
  • Seagate recently purchased Samsung’s hard drive business, so warranties on the remaining product lines that carry the Samsung name will also be reduced
  • Original Coverage

New SSL CA Requirements Published

  • In an effort to solve issues that have plagued the SSL Certificate system this year, a new set of requirements has been put together
  • The goal is to establish a new set of criteria that vendors will use when deciding which CAs to trust. This list, distributed as part of web browsers, operating systems and other SSL clients, is inherently important to the PKI
  • The CA/Browser forum is made up of major CAs such as Comodo, CyberTrust, Entrust, GeoTrust, GlobalSign, GoDaddy, Network Solutions, RSA Security, StartCom, Symantec, Thawte and Verizon. (Interestingly, VeriSign does not appear on the list). The Relying-Parties include Apple, Google, Microsoft, Mozilla, RIM, KDE, and Opera
  • The policy strictly spells out the duties of the CA, such as verifying that the user requesting the certificate actually has control over and the right to use the Domains and IP Addresses listed on the certificate (Earlier this year, certificates for domains such as google.com and mail.yahoo.com were incorrectly issued to an attacker)
  • CAs must also make efforts to ensure the information on the certificate is correct and not misleading (with the advent of internationalized domain names, it was possible to get a certificate for a domain that looked like paypal.com but was actually spelled with a Unicode character that looks very much like the letter a; a short homograph sketch follows this list)
  • All CAs must provide a 24×7 publicly accessible repository of status information about all certificates (whether the certificate has been revoked, etc.)
  • Certificates will no longer be allowed to be issued for internal IP addresses (such as 192.168.0.0/24 or 10.0.0.0/8). New certificates with internal IPs cannot be issued after November 2015, and all existing ones will be revoked in October 2016
  • The common name field is deprecated in favour of the subjectAltName field (a subjectAltName inspection sketch follows this list)
  • Certificates can no longer have an expiration date of more than 60 months. Beyond April 2015, any certificate with an expiration date greater than 39 months requires special documentation
  • Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates
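
As a quick illustration of the homograph problem mentioned in the list above, this hypothetical Python sketch builds a lookalike of paypal.com using a Cyrillic character; comparing code points or looking at the punycode (IDNA) form exposes the substitution that a human eye misses.

    # Hypothetical lookalike domain; the second character is Cyrillic, not Latin.
    import unicodedata

    spoof = "p\u0430ypal.com"   # U+0430 CYRILLIC SMALL LETTER A in place of 'a'

    print(spoof == "paypal.com")         # False: visually similar, different code points
    for ch in spoof[:3]:
        print(ch, unicodedata.name(ch))  # reveals LATIN vs CYRILLIC letters
    print(spoof.encode("idna"))          # the ASCII punycode (xn--...) form gives it away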
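
Because the common name field is deprecated in favour of subjectAltName, here is a small sketch using Python’s standard ssl module (example.com is just a placeholder host) that fetches a live certificate and prints the subjectAltName entries that modern clients actually match against.

    # Sketch: print a server certificate's subjectAltName entries.
    import socket
    import ssl

    hostname = "example.com"   # placeholder host for illustration
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    # subjectAltName is a tuple of (type, value) pairs, e.g. ('DNS', 'example.com')
    for kind, value in cert.get("subjectAltName", ()):
        print(kind, value)

    # The legacy common name still appears in the subject, but clients are
    # expected to rely on subjectAltName instead.
    subject = dict(item[0] for item in cert["subject"])
    print("CN:", subject.get("commonName"))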

How Does the CIA Keep Its IT Staff Honest?

  • “Once you’re in, there are frequent reinvestigations, but it’s just part of process here,” says Tarasiuk, who also gets polygraphed regularly
  • There’s so much top secret information contained within the CIA’s systems that IT plays a key infosecurity role in making sure that CIA employees are not doing anything nefarious.
  • “They are very concerned about foreign intelligence services that are interested in penetrating the CIA. Because of that we pay particular attention to the kinds of things we put on our network.”
  • The CIA’s networks aren’t directly connected to the internet. “We have a very closed network that’s connected to an intelligence community enterprise,” Tarasiuk says, “so I don’t necessarily have the worries about the hackers from the internet trying to break through.”

Feedback

Q: (Markus) I have a small company network (about 5 clients: 1 Windows, 4 Linux). Your war story about Bacula was very interesting. I’m interested in building a dedicated Bacula server for my backups. Do you know an entry-level barebones system that supports the latest FreeBSD and can handle 3 drives (ZFS)? Can I just grab an Intel Atom barebones and expect it to work?

A: An Atom-based system would likely work well for that; you don’t really need much performance to do backups, so the slower RAM, lack of cache/queue depth, and typically weaker SATA controller won’t be an issue for a backup server. I don’t have advice on a specific model; the SuperMicro barebones Atom servers are nice, but they are typically space-saver units that won’t fit more than 1 disk and may be overpriced for what you want. Chris’ Bitcoin Atom Parts List

Atom board with 8GB of RAM Support


War Story

This week’s War Story comes in from long-time JB viewer Irish_Darkshadow (the other, other Alan)

Setting:
IBM has essentially two “faces”, one is the commercial side that deals with all of the clients and the other is a completely internal organisation called the IGA (IBM Global Account) that provides IT infrastructure and support to all parts of IBM engaged with commercial business.

There are sites located in key geographies which then provide that support for their regions and at a rudimentary level, those sites act as failover for each other.

Each of those sites has a team that deals with Incident / Problem / Change Management functions in addition to Crit Sit (critical situations handling) and communications around those disciplines. Sometimes events take place that require multiple sites to cooperate in order to handle certain situations.

The events described below took place between August 14th and 15th of 2003.

War Story:

The EMEA (Europe / Middle East / Africa) CSC (Customer Support Centre) site was based in Dublin, Ireland at the time. The site management arranged to have a night out on the town for the entire location as a sort of “end of summer” event. I was working for the crit sit team at that point and happened to be designated as the “on call” guy that night. Being an Irishman with a healthy liking for the odd alcoholic beverage I was a bit miffed at having to attend such an event and not being able to imbibe.

While at the event I then set about blagging as many vouchers for free drinks as possible to give to my team, and I hassled every management person I could see to get the job done. At one point I went up to the bar to get a round for my team and realised that I was standing beside the on-call Duty Manager. If something kicked off at work, I would be the first person called, and if I needed management support to get things done, this Duty Manager would have been my first call thereafter. My next realisation was that the Duty Manager was knocking back cocktails to beat the band. I questioned this and got one of those “meh, what’s the worst that could happen” responses. My first mistake that night was that I took her response as an implicit “all clear” to have some drinks myself. Several rounds later at around 2am, I decided to have my girlfriend drive me home as she was on soft drinks that night. I arrived home, very drunk, at around 2:35 and was dead to the world about 10 seconds after my head hit the pillow. And that’s where things take a turn for the worse.

I awake at 3:20 to the wonderful melody of the on call mobile phone. Upon eventually figuring out how to answer the phone and then hold it the right way up, I was greeted by an overly enthusiastic support agent. Apparently “some guy” from the US had called in to the EMEA CSC site to request that our Dublin Executive join some conference call in the middle of the night (at least for Dublin). Through the fog of alcohol induced indecision, I somehow managed to realise that this meant contacting the cocktail loving Duty Manager to get approval to wake up the Executive (ya gotta love big blue bureaucracy). I gave my permission to the support agent to make that call for me while I located a cold shower and a source of caffeine. During the following minutes I realised that the cocktail loving duty manager would probably not answer her phone and that I would likely be getting another call. In preparation I went down to the kitchen….impressively staying upright despite my blood alcohol level. Tea was the only option available to me and some toast to soak up some of the sweet, sweet booze in my belly. The phone rang again and it was time to get an update…..as expected, the agent was unable to contact the Duty Manager and so I gave permission for him to call the Executive directly giving instructions for her to call me. Just before hanging up I walked into my living room, turned on the TV and there on the news channel I saw “US power outage – 16 million east coast homes without power”. I had a sudden sinking feeling when I realised that the little graphic they showed covered an area which included some major IBM locations: Research Triangle Park (RTP in North Carolina), IBM Headquarters in Armonk, New York and also MOB North in Toronto. The shit was truly about to hit the fan and if I wasn’t under the influence of alcohol at that point, I likely would have been more worried. Instead, I managed to explain to the agent on the phone what I believed the situation was and how to proceed. I knew that I would have to get to the office and the local taxi service told me that they had no cars available for at least 90 mins. I made the long climb back upstairs….nudged the already miffed girlfriend and requested a lift to work 😀 . After much moaning, she decided she would just start work early anyways and off we went.

Upon arrival at the EMEA CSC site I started organising calls to sort out a plan for handling the initial problems. With those US and Canada sites offline we would have to activate contingency plans in other geographies to cover them. Within the hour we had established that only the Toronto site had not failed over onto backup power. That site was primarily a call-taking centre, which meant I needed to arrange for staff on our site to come in early, cancel all native-language support in favour of English-only support, and then assess incoming workload versus emergency capacity. Oh alcohol, how you did tease me with these conundrums in the middle of the night!

I called Toronto personally to speak with my counterpart there in order to get an update on why they were unable to switch over to backup power. Each site typically has a diesel generator in its disaster recovery plans for just such an eventuality. The Toronto site manager was able to explain to me that the diesel generator simply had not kicked in and they were investigating. I requested 15 minute update calls from that point onwards. The first call exposed that the primary reason for their backup generator failing was that nobody had thought to put any frickin’ diesel in the damn thing! I requested that they arrange for an emergency supply to be procured and get back to me on the next call with an outlook. The next call never happened 15 minutes later, but the following one did (30 mins after I asked for a diesel supply). The Toronto site manager then explained that a supply was en route and would be there in less than an hour. It was about 05:30 for me at that point and I was sobering up fast. I agreed to put off the next update call for an hour while I prepared on our side.

I had to assume that the diesel would be a failure and that meant I needed to arrange for staff to be called, woken up and summoned to work. This included calling in people off vacation and basically staffing for an apocalyptic onslaught of incoming work to handle the overflow from Toronto. Preparations were going well on that front despite the inconvenience to our staff who were being rudely awoken with the wonderful news.

When it came time to speak with Toronto again, nobody answered. Fifteen minutes later….still no answer. This went on for about 45 minutes before I got the site manager on the line. The conversation went something like this:

Me: Ok, where the hell have you been for the last 45 minutes?!?!

Toronto: I’m at the compound with the diesel truck.

Me: That doesn’t exactly answer my question. Are you guys up and running now?

Toronto: No, the truck guy says that it will take up to an hour to fill the generator and it cannot be switched on until that is done.

Me: Ok, that’s good news. So in an hour or so you guys will be powered up and my staff only need to cover that time for you. Excellent, I’ll inform the Execs.

Toronto: Eh, I wouldn’t do that just yet.

Me: Why not?

Toronto: There’s another problem.

Me: You have my undivided attention.

Toronto: We can’t actually get to the backup generator to fill it with diesel.

Me: I think that warrants further explanation.

Toronto: The gate to the compound that surrounds the generator…well…..it’s electrically powered!

And there you have it, folks: in IT support, when you see high-level disaster recovery plans being put in place, maybe somebody with some common sense should take a look over them and ensure that a crucial diesel backup generator actually has fuel in it and that it can be accessed in the event of a power outage! (And never, ever get drunk when you’re the on-call guy.)


Round Up: