Microsoft has, in the past, had a reputation for not taking security seriously. The company previously ran on the idea that users want features, so that is where the development money went; security was added only where it couldn't be avoided.
Things changed and security became a feature. Microsoft woke up and have done an amazing job of establishing a patching schedule (Patch Tuesday) and supplying tools like WSUS and MBSA to make sure that patches are rolled out with minimal issues.
That's great for larger organisations, but while my PC at work is always up-to-date and secure, my PC at home has been lagging. I feel rather safe because it is not connected to the Internet 24/7 and is firewalled when it does dial up. Yes, dial up. With a modem. I don't process any funny documents on the box so it is really in a safe world of its own.
But being a security professional I feel that I should take some time to patch the box just to be sure.
So... let's get back to that modem thing. My modem does not run at 100% and the connection is pretty faulty. In South Africa local calls are charged for, so it could get quite pricey to patch my machine, not to mention the amount of time my phone at home would be engaged.
That is for my one PC... if I had others, the time to download and patch would be even longer.
Enter the amazing AutoPatcher software. All the Microsoft patch happiness you can get (and other stuff too!) on one little platter! Basically, it is all the Microsoft patches on CD with a utility to work out what is needed and deploy it. Download it at work, burn it, take it home and patch, patch, patch. It is one amazing little package and so necessary for smaller companies and home users.
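If you wanted to roll a very stripped-down version of the same idea yourself, it might look like the sketch below. This is only an illustration of offline patching in general, not how AutoPatcher actually works: the folder name is made up, and the /quiet and /norestart switches are an assumption that holds for many, but not all, standalone Microsoft update installers, so check each patch's documented switches first.

```python
import subprocess
from pathlib import Path

# Folder of standalone patch installers copied from the CD.
# The path is made up for this example.
PATCH_DIR = Path(r"D:\patches")

def install_patches(patch_dir: Path) -> None:
    """Run every patch installer in the folder silently, one at a time."""
    for patch in sorted(patch_dir.glob("*.exe")):
        print(f"Installing {patch.name} ...")
        result = subprocess.run(
            # /quiet and /norestart are assumed switches; verify per patch.
            [str(patch), "/quiet", "/norestart"],
            check=False,  # carry on even if one patch refuses to install
        )
        print(f"  exit code: {result.returncode}")

if __name__ == "__main__":
    install_patches(PATCH_DIR)
```

Rebooting once at the end, rather than after every patch, is the whole point of the /norestart switch.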
Microsoft also benefit from the bandwidth savings and happier customers (isn't that what business is all about?).
But now Microsoft have instructed AutoPatcher to remove the Microsoft patches from their site. They are quite entitled to do this under copyright law, because the patches really are Microsoft's patches repackaged. It does mean, though, that AutoPatcher no longer has much of a purpose.
I can understand that Microsoft doesn't want to face legal liability if AutoPatcher breaks a third-party machine, but I now have no idea how to patch my home PC as quickly and easily as I could before.
If I were Microsoft, I would have bought out AutoPatcher for less than Bill Gates makes in a day and renamed it Microsoft CDPatcher. That move would have shown that Microsoft is serious about security and cares about its customers, rather than serious about security only where there is money to be made.
As it stands today I think Microsoft has made a mistake.
Thursday, August 30, 2007
Monday, August 27, 2007
Dr Beetroot and the Stolen Records
This is my take on the whole Manto Tshabalala-Msimang vs The Sunday Times controversy.
Being an Information Security professional I am going to relate it as I see it. And the way I see it both the minister and the paper are correct.
For those of you who read this blog and are not from South Africa I'm going to put a bit of background down for you. If you are from South Africa you can safely skip the next little bit - you know this already.
Manto Tshabalala-Msimang is the Minister of Health and is also known as Dr Beetroot because of her much-criticised belief that AIDS is better treated with vegetables than with medicine. This belief kills people every day, and the opposition want her to leave the government because of it.
The Sunday Times is the most popular weekly newspaper in South Africa, and it published an article that hinted very strongly, without saying it outright, that the minister is an alcoholic. The paper based this on evidence from her medical records showing that she had alcohol while in hospital when she was not supposed to.
The Minister has not denied that she had alcohol while in hospital, but she is upset that the Sunday Times had a copy of her medical records. (This is typical government spin doctoring; as Nick Naylor puts it in Thank You For Smoking: "That's the beauty of argument, if you argue correctly, you're never wrong." But that's not the point of this post.)
The point is that the Sunday Times did not steal the documentation. They merely happened to get a copy of it. And, once they had a copy, it is their duty to report on news they think the country should know about. And, of course, the whole country is following this very closely so the Sunday Times was right to publish.
So, where does information come from? That is the big question. In Information Security we have a saying "protect all the information that you don't want to read about in tomorrow's newspaper". The Sunday Times is a respectable, "non-tabloid" newspaper. I can't picture their staff crawling around in hospitals, looking for medical records or hacking into medical systems.
Somehow there was a leak at the hospital, and that is who the minister should be going after. But it is a lot easier to sue a newspaper than a hospital, especially for the Minister of Health, who would like to pretend that all is well with patient records in hospitals.
The Minister is right that her private details should be kept private, but once they are in the newspaper it is too late. The information should have been protected from the start, and the hospital is (in my humble, non-lawyer opinion) to blame.
If the Minister does take the issue up with the hospital, some awkward questions may arise: why did she use a private hospital for an operation that could have been done at a public one, and why does the government not protect patients (even at private hospitals) from having their records go missing and end up at newspapers? Maybe California can help her out.
More on the TJX Stock
It appears that TJX have taken a bit of a knock, but their share capital is $14 billion.
This means that a hack that costs them $118 Million is peanuts. To them. It essentially ends up costing each shareholder 25c per share. I don't see any shareholder selling shares based on this hack attempt.
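Here is the back-of-the-envelope arithmetic behind that claim. The share count is my own assumption, backed out of the numbers in this post rather than taken from TJX's filings:

```python
# Rough arithmetic behind the "25c per share" and "peanuts" claims.
breach_cost = 118_000_000          # breach-related costs quoted above, USD
market_value = 14_000_000_000      # TJX's value as quoted above, USD
shares_outstanding = 460_000_000   # assumed; implied by roughly 25c per share

print(f"Cost per share: ${breach_cost / shares_outstanding:.2f}")            # about $0.26
print(f"Cost as a share of market value: {breach_cost / market_value:.1%}")  # about 0.8%
```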
The other interesting thing is that Javelin Strategy and Research, based on research done in April, said that:
"77 percent of consumers intended to stop shopping at merchants that incurred a data breach"
yet just a couple of months later they have had to explain why TJX has reported an increase in sales since the hack. Javelin's explanation is that there is simply not enough competition.
Gartner are also a bit boggled by this fact but they comment that:
"Most TJX customers clearly care more about discounts than about card security, because they know banks will usually cover potential losses if a card is stolen and used, with the costs eventually shifted back to the retailers."Gartner go on to preach on how retailers should adhere to good security practices but the "... OR ELSE!" is a bit weak.
I guess this proves that people are just not logical. It will probably take a lot more pain on their behalf before they say that they will avoid shopping at a store with bad card protection and then actually do it.
It also shows that TJX is just not a very good example of what effects a hacking incident can have on a business. They have a strong company, a lot of money to play with and the ability to entice customers back even after TJX has lost their private information.
The TJX stock is just not co-operating!
According to Wikipedia:
The TJX Companies, Incorporated is the largest international apparel and home fashions off-price department store chain, based in Framingham, Massachusetts in the United States.
[...]
On January 17, 2007, TJX announced that it was the victim of an unauthorized computer systems intrusion.
TJX ended up as the default PCI black sheep. PCI, for those not in the know, is an industry standard created by the credit card companies that tells stores how to protect their customers' information, specifically credit card information.
Basically, TJX did everything wrong, including storing information they should never have stored in the first place. Some 45 million credit card numbers are now being traded on the black market because of this breach.
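The spirit of that part of the standard is simple: don't keep the full card number if you don't need it, and never keep the CVV at all after authorisation. The toy sketch below illustrates the idea; it is not a PCI-approved implementation and the record layout is made up:

```python
def mask_pan(pan: str) -> str:
    """Blank out everything except the last four digits of a card number."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

# What actually gets written to disk: no full PAN, no CVV field at all.
record = {
    "customer": "J. Soap",                    # made-up example customer
    "card": mask_pan("4111 1111 1111 1111"),  # stored as ************1111
}
print(record)
```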
Net income for their 2nd quarter dropped 57% due to information security costs related to the breach.
Bad news for the company, right? Wrong. Maybe someone can explain this to me, but on January 16th, 2007 the share price was $29.94, and Friday's closing price was $30.75. Man, did the market get them (not)! To be fair, they underperformed the S&P 500 until recently, but the company does not seem to have been hurt much by the breach.
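The "punishment", in numbers, using only the two prices quoted above:

```python
# How badly the market "punished" TJX, using the two prices quoted above.
price_before = 29.94   # 16 January 2007, the day before the disclosure
price_after = 30.75    # the Friday close mentioned above

change = (price_after - price_before) / price_before
print(f"Change since the day before disclosure: {change:+.1%}")  # about +2.7%
```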
Thursday, August 16, 2007
Calif-online-crime Law
According to CSO, merchants in California may end up liable for data breaches.
I think this is a good thing but I also think it is a bad thing.
It's good because a lot of large companies pay lip service to Information Security and don't take it seriously enough; this will make sure that they do. It is also good because it is not the poor customer who carries the risk when he does his shopping.
It's bad because it punishes companies for essentially being victims of crime. Not only does the company suffer from the crime itself, it also suffers from the after-effects of the crime.
On the other hand (I think we are up to three hands by now), there is always a risk in doing business, and especially a risk of crime; it has just moved online now. Companies make good profits, or else they would not be doing what they are doing, so they need to put some of those profits into protecting themselves and their customers' information from the criminals, rather than ignoring the issues and pushing the risk onto the very customers who give them money.
I guess it's a bit like locking my expensive car and keeping the keys in my pocket, but borrowing a friend's cheap car and leaving it unlocked in the street with the motor running because, hey, it's not my car.
This law is receiving strong opposition but I think it will be passed. If it is, you can bet that somehow the cost will be passed on to the customers, who will end up paying to protect their own information.
I think this is a good thing but I also think it is a bad thing.
Its good because a lot of large companies pay lip service to Information Security and don't take it seriously enough. This will make sure that they do. It is good because it is not the poor customer who takes the risk when he does his shopping.
Its bad because it attacks companies for essentially being victims of crime. Not does the company suffer from the crime itself but it suffers from the after effects of the crime.
On the other hand, (I think we are up to 3 by now) there is always a risk in doing business and especially a risk of crime, it has just moved online now. Companies make good profits or else they would not be doing what they are doing so they need to offset some profits into protecting themselves and their customers' information from the criminals rather than ignoring the issues and pushing the risk onto the very customers that give them money.
I guess its kinda like me locking my expensive car and keeping the keys in my pocket but borrowing a friend's cheap car and leaving it unlocked and motor running in the street because, hey, its not my car.
This law is receiving strong opposition but I think it will be passed. If it is you can bet that somehow the cost will be passed on to the customers who will pay for protecting their own information.
The Wall Street Journal Followup
Since my posting on the 7th, the Wall Street Journal has published a follow-up article here.
It is by the same author, who obviously was not aware of my post, because she gets most of it wrong again. She chose to ignore Andy's input too. I found out about this follow-up from his blog; thank you, Andy.
My original post pointed out the main problem with her article: the Information Security policies that she shows readers how to bypass are not made up by IT but by the security department. More to the point, they are signed off by upper management, and by breaking them you can get into serious trouble with the boss. Failing that, the boss himself may get into serious trouble with the law.
The author writes in this article about how "IT workers said they get blamed both by employees who feel too restricted and by company executives who, when things go wrong, fume that policies must not have been restrictive enough."
At the end of the day it's not the IT guys who should be enforcing security; they have enough on their plates. It is business people themselves who should be enforcing the rules.
The IT department is usually the least respected department; it hires young people who have not yet learned the art of dealing with people, especially people in upper management. More importantly, they are enablers. They fix things and make things work, and that is how they are rated. They are also clueless (or they should be, anyhow) about which information is actually important.
What about the fools in the Information Security department, I hear you ask. They are there to make sure that Information Security is done, yes. But at the end of the day neither they nor the IT guys will be in big trouble if information is lost or leaked, or if wrong decisions are made using altered documents. It is the business that pays. So why have these lazy Information Security guys around in the first place? Really, it's to inform the business people and to help them implement security.
If your staff are knowingly breaking rules that you have put in place... well... no firewall, IDS, antivirus or amazing CISSP is going to save your data.
I think the WSJ has missed an opportunity to push the idea that Information Security is important, that the rules are there for a reason, and that breaking them will not only upset the guys in IT but can cost an employee the respect of his or her employers and possibly even his or her job.
Monday, August 13, 2007
3rd Party Security - The big question
As happens in the "Blogworld", I read a blurb in the Daily Incite, which linked to a good blog entry by Andy, the IT Guy, which in turn linked to a really good PDF document by Rebecca Herold, who has more letters after her name than in her name.
While we are so busy concentrating on our own security structures (You are, aren't you?) how do we make sure that our partners are protecting our data?
There are several places where this is important:
- The obvious first one: you give your credit card information to someone. What they can and can't do with it is governed by a standard made by the credit card companies, called PCI compliance. It seems most companies don't abide by the rules, but the rules have been very well designed and slowly, hopefully, companies will come to abide by them. The nice thing is that PCI compliance is already worked out; you don't have to worry. You should, as a matter of principle, make sure that a company is PCI compliant. I think it would be a good idea for the credit card guys (Visa, MasterCard, etc.) to actually promote PCI compliance as a marketing tool for companies to display proudly on their websites and in their stores.
- You fill in a form, any form, anywhere, online or offline. This is your personal, private information and you should know exactly what happens to it. If you have to hand the information over because of some law, such as the ones preventing money laundering, you don't want that form going to the company's marketing department ("You are a treasured customer of ours, do you want to be the first to use our new services...?"). You also don't want it thrown in a dustbin and used by whoever finds it in the street.
- You are trusted with someone's details and have to send them to a third party. If something happens to the details, you are the one who gets the blame.
The PCI standard came out of a need to protect data but there should be a broader standard for all types of data allowing us to make spot decisions on who to trust and who not to trust with our data.
And, taking an observation from Andy but broadening it: the specification of how data is looked after should be more specific than a framework. A framework is fine for protecting your own data, but other people should be able to judge exactly how you treat their data.
But, on the other hand, you don't exactly want to go around to every company that you deal with (perhaps all over the world) investigating in minute detail exactly what methods they use to protect their network and data. You can't be expected to watch that none of their staff take their laptops home etc.
You shouldn't even be expected to take a look at their policies.
You should just want to be able to see a logo that says "we are secure up to level 3 of the 3rd Party Information Control Standard (3pics)". Level 3 might be good enough for a bank, a video shop may be able to get away with level 2, and a doctor should have level 4.
By the way, I made up 3pics because, as far as I can see, there is no widely accepted standard with clearly defined levels that the man in the street can trust and get used to (except PCI, and that covers credit card information only). But shouldn't there be? Wouldn't it be nice to be able to trust that a company you are about to deal with will treat your information the same way you do?
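To make the thought experiment concrete, a "3pics" check could be as trivial as the sketch below. Every name, level and number here is invented, just like the standard itself:

```python
# Hypothetical sketch of the made-up "3pics" idea: each kind of data demands
# a minimum compliance level, and you only hand data to partners that meet it.
REQUIRED_LEVEL = {
    "video_rental_history": 2,
    "credit_card": 3,
    "medical_record": 4,
}

# The level each (fictional) partner has been independently certified at.
PARTNER_LEVEL = {
    "CheapVideos": 2,
    "BigBank": 3,
    "LocalClinic": 3,
}

def may_share(data_type: str, partner: str) -> bool:
    """True if the partner's certified level covers this kind of data."""
    return PARTNER_LEVEL.get(partner, 0) >= REQUIRED_LEVEL[data_type]

print(may_share("credit_card", "BigBank"))         # True
print(may_share("medical_record", "LocalClinic"))  # False: level 3 < 4
```

The point is not the code; it is that the decision becomes a one-line lookup instead of a full-blown audit of every partner.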
Rebecca's PDF document (linked above) goes into great detail about how one can manage personal information that is given to third parties, but it is a lot of work. It is fine for companies with few partners, but when there are many partners it would be nice to be able to just check their "3pics" compliance level and start dealing with them.
If you want to argue that this is already possible using ISO, SOX and the like, then read what Andy said in his article about how these are just frameworks and not generally accepted standards.
What we need is someone (who, me? I'm too busy ;) to create an auditable standard with a few levels that are easy to understand and implement. And for companies to use the standard and brag about their level of security.
I think part of my thinking comes from discovering this week (but not being rich enough to follow through with actually buying and reading it) a book by Stephen Covey (Jnr) about how, once trust is established, business can proceed quickly. It is up to us as the public to demand that companies show how they can be trusted with our private information. It is up to us Information Security specialists to make it easy for them to do so.
Tuesday, August 7, 2007
The Wall Street Journal only got one (major) thing wrong.
The Wall Street Journal, a Dow Jones & Company publication, ran an article that had a few of my peers quite upset.
Particularly upset was my brother-from-another-mother Andy the IT Guy. I call him that because although we are thousands of miles apart we have similar jobs and usually see eye to eye on matters. His post on the issue is here. In the post he links to other bloggers who rip the article to shreds.
I leave it up to the dedicated reader to follow all the links and get acquainted with the article and see why it has upset Andy and several others. Go do that now...I'll wait...
...
If you are reading this I hope you clicked the above links and read up on the issue...here comes my 2c.
The article got it exactly right except for one major issue, and it is in the title!
It is not the IT department that is trying to stop you doing all of those things, it is the security department.
In fact, in most companies, if you are quick (and you have to be quick) you'll see that the IT guys are the ones who break the rules the most. Find the geek with the long black coat and chances are he is the guy running the phantom MP3 server that everyone knows about but that officially doesn't exist.
Now that that is cleared up, you may ask: so what? Information Security department... IT department... who cares? But it does make a difference. IT has a mandate from Management to keep the servers humming and the information flowing; that's their job in a nutshell.
Information Security has a mandate from Management to make sure that the company does not leak information and does not break the law. The Information Security guys are also not the ones who make the rules; they may make suggestions, but the people who sign off the policies and rules are Management (read: your boss, his boss, and so on up to the CEO). The rules you are breaking are set down not by IT but by your boss.
Some of the rules (such as rules 1, 2 and 3) are actually there to stop the top-level guys from going to jail, or at least to stop the company from being on the receiving end of some expensive legal problems. You can be sure that they would not take kindly to having these rules broken.
Obviously I am all for freedom of the press, but just know who sets the rules and who signs off on them: it is not IT.
Friday, August 3, 2007
Can your business survive without petrol?
So, yet another strike and another risk to your business.
It felt like I was in Zimbabwe this morning: I had to queue for petrol. I'm not saying it is as bad as Zim, because the queue was only five cars long and there was petrol available when my turn came. I did put in more than I usually do.
I was lucky because I take LRP; the station I went to had no unleaded.
If you have a large corporation, what would you do if 70% of your staff were unable to travel in to work every day? Can they work from home? Can your VPN handle the load? Do you know your business well enough to work out who should come in to work, who should dial up or connect over VPN, and who should just take a few days off?
If you have a small business, can you afford for your staff not to come in and to do their work from home? Can you afford for your clients and customers not to come and visit you? Can you afford not to visit them?
One of the aspects of Information Security is availability, and most large companies have a plan for Disasters (note the capital: we are talking floods and earthquakes) but not for smaller issues like a lack of fuel. Most small businesses run on gut feel; they will cross that bridge when they come to it. The bridge is here now, and it is called Business Continuity.
The most difficult thing about Business Continuity is that it forces us to examine our assumptions. We assume that we can buy petrol whenever we want, to get us in to work. We assume that once there we will have access to water, food, toilets, electricity, a fairly comfortable working environment (Goldilocks' not too hot and not too cold), email, our data, the telephone network, and so on. Business Continuity is basically the process of asking "what if something is missing?", and anyone can do it. Usually the owner or the business people are the best at it because they understand the business and how it works.
It can get a little more complicated when multiple things are unavailable at once, which is very likely for many businesses at the moment. If you have no electricity and no diesel for your generator, what can you do? Work from another site where there is electricity? Then chances are you will use more fuel getting there.
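One way to make the "what if" exercise concrete is to list your assumptions and walk through them in pairs, checking whether the fallback for one assumption quietly depends on another. The dependencies and fallbacks below are made-up examples, not a real plan:

```python
from itertools import combinations

# Each assumption maps to (fallback plan, what that plan itself depends on).
# All of these entries are illustrative placeholders.
FALLBACKS = {
    "petrol":            ("staff work from home over the VPN", "telephone network"),
    "electricity":       ("run the generator", "generator diesel"),
    "generator diesel":  ("move critical staff to another site", "petrol"),
    "telephone network": ("use cellphones and catch up on email later", None),
}

def review() -> None:
    """Flag pairs of failures where a fallback depends on something also missing."""
    for missing in combinations(FALLBACKS, 2):
        for item in missing:
            plan, needs = FALLBACKS[item]
            if needs in missing:
                print(f"No {item} AND no {needs}: fallback '{plan}' also fails")

review()
```

It is crude, but it surfaces exactly the trap mentioned above: no electricity plus no diesel means the generator plan is worthless.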
Is it worth having your staff come in later to avoid rush hour, in the hope that their petrol will last longer? There is a humane aspect too: if the strike lasts long and petrol is scarce, will you let your staff save their petrol for family emergencies?
The strike is three days old and negotiations are happening. Hopefully there will be no issues at all except some minor inconvenience and some bad Zimbabwe comparisons. I will then take off my Chicken Little hat, but in the meantime: don't panic, but have a plan.
Wednesday, August 1, 2007
Facebook privacy... I'm sure it was there a second ago...
So, some people I know were bored yesterday, looking for something to do while Facebook got its act together. The site was down and productivity worldwide picked up.
But worse than that, according to an article in The Times, Facebook also let private information slip.
I love it when Information Security makes the general news: it gets people thinking about Information Security.
Basically, you could read your friends' private Facebook messages and see their private content even though they had set it not to be shared.
Wow, you must think, Facebook's lawyers must be sweating... not quite. In their privacy agreement is this little nugget:
"You post User Content (as defined in the Facebook Terms of Use) on the Site at your own risk"
Maybe you should reconsider what information is shared on a publicly accessible site.