No Expert, No Cry

Why you shouldn’t trust (awareness) experts, what you should trust instead, and my new year’s resolution.

By Eh’den Biber

(see the update at the end of the post…)

Prologue – SANS

During the SANS European awareness summit, I ended up in an interesting debate on Twitter with one of the attendees (John Scott). The debate was about my observation that science was not part of the agenda at this major awareness summit. There was not a single scientist on stage talking about their breakthrough research, and none of the tweets about the event (#SecAwareSummit) included any science.

My observations didn’t go down well with John, who seems to have taken them a bit personally. To show me I was wrong, he mentioned that Jessica Barker gave a talk. Yes, she did, and yes: she’s a doctor (of civil design), and I barely finished kindergarten.

When SANS finally posted the slides from the event (including the workshops that took place beforehand), it turned out that the only speaker who provided external references in their slides was Jess (well done). She mentioned five academic papers (from 1996, 1999, 2008, 2008 and 2009), one TED talk (2012) and one book (2017). Only one of the studies mentioned focused on information security (2009, “Self-efficacy in information security: Its influence on end users’ information security practice behaviour”). It used social cognitive theory, and its results suggested that simply listing, in the users’ information security policy, what not to do and the penalties associated with wrongdoing will have a limited impact on the effective implementation of security measures.

I’ll let Iago express my feelings about that one:

Show Me the Science

If we wish to change behaviour, we need to be able to measure it. What to measure, when to measure, where to measure, how to measure – for all of these questions, the scientific method is the most effective approach we humans have found so far.

Yet when it comes to infosec and privacy awareness, we seem to be totally ignoring science. The SANS summit is just one sad reflection of this. Very few, if any, vendors of “information security awareness training” material or services will provide deep detail on the scientific approach they used to develop their solutions, let alone any conversation about evidence. Is there any double-blind, placebo-controlled trial that shows the effectiveness of one method over another? Not that I know of.

In many ways, it’s astonishing. We are in 2018, for Christ’s sake! Awareness of the security and privacy elements of information systems matters to technology-dependent societies as well as to companies. If that’s how much science is involved, no wonder nothing really changes.

The Single Most Important Measurement in Awareness Training

The book “How to Measure Anything in Cybersecurity Risk”, by Douglas W. Hubbard & Richard Seiersen, was written to explain why the methods most organizations currently use to measure cyber security risk are not fit for purpose, and to suggest a quantified risk-management approach. Chapter 4 of the book was originally called “The Single Most Important Measurement in Cybersecurity”, and I will follow its structure to talk about the single most important measurement in awareness training.

First point – awareness training matters.

Doing awareness training just to be “compliant” is simply insufficient these days. The ability to prove that employees have passed awareness training will not protect your organization from the current risk landscape. Take, for example, GDPR. Without awareness of information security and privacy across all stakeholders, organizations cannot achieve “privacy by design” and “security by design”. As such, their risk level increases: the magnitude of a future loss event grows, because a breach now carries a greater secondary risk (the regulator) caused by the lack of GDPR compliance. The checkbox days, in which organizations only had to show records that employees completed a CBT on information security and privacy, are over. An organization with a high level of awareness of information security and privacy will carry a lower risk of regulatory action, and will outperform organizations that only perform awareness activities for compliance reasons.

But how do you know which method of awareness training works? What do you measure? Is it possible that your awareness training doesn’t work at all? Even more importantly, how can you measure the awareness training method itself?

“We often hear that a given method is ‘proven’ and is a “best practice.” It may be touted as a “rigorous” and “formal” method—implying that this is adequate reason to believe that it improves estimates and decisions…Some satisfied users will even provide testimonials to the method’s effectiveness. But how often are these claims ever based on actual measurements of the method’s performance?”(How to Measure Anything in Cybersecurity Risk, Douglas W. Hubbard & Richard Seiersen)

If your organization is using an awareness training method that can’t show meaningful, measurable improvement – or, even worse, one that makes awareness levels drop – then the method itself becomes the single biggest awareness-related risk, and improving the method becomes your single most important awareness priority.

We need either to find a measurement that has already been proven to work or, if we don’t have one, to propose a measurement that will allow us to identify a good awareness training method – as well as to identify which measurements we shouldn’t be taking.

Regardless of the method you currently use to educate people about information security and privacy, the question you must ask yourself first is: does it work, and how do I measure its success?

How can you tell whether the baseline you measured at first was the right baseline, and how can you tell whether your measurements were accurate? Take, for example, the typical “phishing” exercise so many organizations use as part of their baseline analysis – what exactly are you measuring there? If someone didn’t click on a phishing email, does that mean they won’t click on the next one when they are checking their email on their mobile phone? If someone reported a phishing email, does that mean they will design information systems that follow the “privacy by design” principles? A sketch of what a more controlled comparison could look like follows below.
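To make the measurement question concrete, here is a minimal sketch of how one could compare two training methods in a controlled way, using click rates on a simulated phishing campaign as the (admittedly imperfect) metric. All the counts, group sizes and the choice of statistical test below are hypothetical assumptions of mine, not a method endorsed by any vendor or study:

```python
# A minimal sketch of a controlled comparison of two awareness training
# methods, using phishing click rates as the (imperfect) outcome metric.
# All counts below are hypothetical; in a real trial, employees must be
# randomly assigned to groups and receive identical simulated phish.
from scipy.stats import fisher_exact

# [clicked, did_not_click] per group (hypothetical data)
method_a = [38, 162]   # 200 employees trained with method A
method_b = [61, 139]   # 200 employees trained with method B

_, p_value = fisher_exact([method_a, method_b])

rate_a = method_a[0] / sum(method_a)
rate_b = method_b[0] / sum(method_b)
print(f"Method A click rate: {rate_a:.1%}")
print(f"Method B click rate: {rate_b:.1%}")
print(f"p-value for the difference: {p_value:.4f}")

# Note: a low p-value only says the click rates differ between methods.
# It says nothing about whether clicking is the right thing to measure,
# or whether the effect persists over time or across devices.
```

Even this toy example makes the real problem visible: the statistics are the easy part; deciding whether the click rate is a valid proxy for awareness is the hard part.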

The Awareness Placebo

Meet the “analysis placebo,” also known as the “overconfidence effect” – the feeling that some analytical method has improved decisions and estimates even when it has not. Numerous studies across various fields have shown that training can raise confidence without raising actual performance. Here is one example: in a 1999 study, some participants were trained in lie detection and others were not. When both groups were shown video tapes of investigations, the group trained in lie detection was more confident about its lie-detection skills than the other group – yet it actually performed worse than the untrained subjects. Another study showed that clinical psychologists became more confident in their diagnoses, and in their prognoses for various risky behaviours, as they gathered more information about their patients, even though the observed outcomes of those behaviours did not actually improve.

If you work in the field of awareness training, the fact that you are exposed to relatively more information on the subject than others will not make you an expert, nor improve your ability to judge whether the awareness training you choose will deliver what it promised. Actually, even the term “awareness placebo” is too generous – in medicine, placebo medication has shown positive effects on patients who took it believing it would help them, while the “awareness placebo” has zero benefit for the state of awareness, and in fact reduces it. Remember the phishing exercise I mentioned before? The fact that someone successfully detected a phishing email can actually make that person act in a less secure way, because it may inflate their perception of their own judgement about information security and privacy when, in fact, there has been no real improvement.

In Science We Trust

The take-home message is: do not rely on “experts” just because of their credentials or experience. If you are the one in charge of delivering awareness training – remember your biases, study them, and always insist on using reason and evidence to reach conclusions about awareness training methods and their capabilities. In other words – use critical thinking, ask for the scientific methods behind the awareness training, challenge the numbers, and never trust your own perception, as you are most likely unaware of the biases that blind you.

Which brings me to my new year’s resolution:

International Cyber Security and Privacy Awareness Coalition (ICSPAC)

Cyber security and privacy awareness training should provide a measurable increase in people’s ability to act and react correctly with regard to information security and privacy-related decisions and actions, and to maintain that ability for a pre-defined window of time, across different states of being.

The challenge is that the current platforms used by “awareness experts” to share and exchange their work are not provided by an objective body. They are provided either by vendors (e.g. SANS) who have their own methodology, by information security and privacy professional bodies that are biased by their internal politics, or by governments, who have no clue what awareness is.

Since I’ve written extensively on the subject of awareness (see the references below), and since science continues to be ignored, I have decided to found a new organization called the International Cyber Security and Privacy Awareness Coalition (ICSPAC). It will be an open, non-partisan, non-profit organization that aims to educate policy makers, organizations, professionals and the public about conclusive science that can be used to improve the level of awareness in the fields of cyber security and privacy awareness / culture, and, where science is absent, to encourage additional research.

Please join if:

  • You wish to find out what an effective, evidence-based approach to cyber security and privacy awareness training can look like.
  • You wish your awareness-related metrics to deliver a meaningful indication of the state of cyber security and privacy awareness.
  • You wish to encourage conversations and debates that are agnostic of vendors, politics and professional bodies.

In my next article, I will provide some science-based insights into awareness training, some of which might be surprising.

Everything gonna be alright…

References

I published about 35 articles related to the subject of awareness and culture. Here is the list:

2011

  1. Collective Corporate Judgement – suggestion to tackle social network risk is by a concept I will call collective corporate judgement.
  2. Killing Social Engineering – talking about human manipulation as a neurological phenomenon.
  3. Amygdalala-land – understanding the neurological limitations and advantages of our human brain.
  4. Play Dead – “helping your user and friends’ community can only be done if you find a way to empower them, not scare them to death.”
  5. The Metrics – biological, biochemical and neurological examples of why people might say they will behave responsibly – and believe it – but will not act responsibly.
  6. Men without hats are living on the edge – How to solve the Clash between ethics, personal integrity, “the system” and hacking?

2012

  1. Antifragility and the year of the cut – embracing the randomness, chaos and uncertainty of hackers as a survival strategy in these uncertain times.
  2. Failwareness – an example of how focusing on accountability and standards leads to low awareness.
  3. Social engineering in the 21st century – the lost videos…

2013

  1. Awareness vs. Consciousness – Why “awareness” training fails and the role of consciousness in our lives.
  2. Suicidal Consciousness – how stress “kills” conscious behaviour.
  3. Don’t professionalize, innovatize – on the difference between “scientists” and “technicians”.
  4. Pray We See – The problem of privacy education.

2014

  1. Personal message to the information security awareness community
  2. The Desolation of Awareness – 1 – The Art of Noticing – why awareness is not as straightforward as most of us perceive it to be.
  3. The Desolation of Awareness – 2 – Making Sense – Is there an information security sense, like there is a sense of smell? Can we evaluate it? Why does our normal definition of information security prevent us from reaching awareness?
  4. The Desolation of Awareness – 3 – One Sense to Rule Them All – What do the colour blue and information security have in common? The fascinating world of the mind.
  5. The Desolation of Awareness – 4 – Buddha Was a Hacker – The root of all problems, Baron Münchhausen, why “no” fails, and why Buddha was a hacker.
  6. The F word – Part 1 – FORGIVENESS – According to neuroscience, both self-criticism and criticism of others bring a lack of awareness; forgiveness and compassion…
  7. The Invincible Warrior – An awareness tale (or a tale of awareness)

2015

  1. The Awareness Pseudoscience – Moving from benchmarking to baselining.
  2. The Technology Insanity – Why technology is not the solution to lack of awareness.
  3. Dancing with Faust – The hidden cost behind technology addiction, the knowledge culture, and the abandoning of wisdom.
  4. The Corporate Book of the Dead – What will you do when your organisation is annihilated by a cyber-attack?
  5. Why corporations don’t get cyber, or: Cyber, The Supreme Understanding – the supreme understanding of why corporations struggle with cyber, and why it is so hard to find a CEO (and board members) who understand cyber.

2016

  1. The Cyber Minority Report: Gender Affairs – Investigating the evolutionary relationship between women, information, and security, via the prism of the red queen hypothesis.
  2. Mr Big (Data) – Why big data and analytics are sexy, and why only awareness can secure them.
  3. EU: The Post Mortem Edition – ANALYSIS: How lack of awareness led us to BREXIT, and what can we do next?
  4. Breaking The Iceberg – What the US election tells us about the lack of awareness we live in – and how it all relates to information security.
  5. When a Muslim met a Jew (the X-rated edition) – Why our inability to grasp the state of the others is in the root of our failure.

2017

  1. Awareness Myth Busting – Why attempts to raise the level of awareness to information security are failing, and what to do in order to change it.
  2. The Revolution – How I became part of an invisible hacking revolution.
  3. GDPR “Unknown Unknowns” – The art of privacy, and why what you don’t know (about the GDPR) WILL kill you.
  4. #Cyberblind – Why salaries and job ads are superb indicators of your organisation’s cyber security maturity, how it can be improved, and why your organisation won’t do anything to fix it.
  5. Uber and Under the Breach – Everything you need to know about the Uber data breach, and much more on Uber culture.

 

Presentation given by Jess (thanks, Alain Griffen!)

This is NOT the presentation given at the SANS event, but based on the references it should be very close to it…

Let’s break down one example given:

“This seems to support that the fact that the very existence of a stereotype puts pressure on individuals who are the subject of that stereotype, to mean that they don’t perform as well. So obviously this has connotation in all sorts of different groups, for example women and ethnic minorities when it comes to cyber security”.

So, is there a problem with that statement?

Now, first of all, let us look at the original research (Stereotype Threat and Women’s Math Performance, 1999):

The first thing to notice is that in this study (Study 2 in the paper), the researchers took the difficult test used in Study 1, divided it into two halves, and gave participants 15 minutes to complete each half. There were 30 women and 24 men.

Half of the participants were told that the first test was one on which there were gender differences and that the second test was one on which there were no gender differences. The other half were told the opposite: that the first test was one for which there were no gender differences and that the second test was one on which there were gender differences.

What Jess didn’t say is that men and women performed equally on the second half of the test, regardless of whether or not they were told that this part showed gender differences.

So what happened to the “stereotype threat”? Did it diminish? What about the responses – did women catch up in their accuracy?

The researchers’ conclusion for this study was:

We believe that by presenting the test as one on which gender differences do not occur, we made the stereotype of women’s math inability irrelevant to interpreting their performance on the test—this particular test.

I will come back to that in a second. Before that, I wish to mention that there was also a third study, run because the researchers realised the second study had been conducted in a way that might have influenced the results (in the first half). I have huge reservations about that study; IMHO, its whole design makes it a felony to call it science. Why? Because it put the following statements to all participants:

  • If I do poorly on this test, people will look down on me;
  • People will think I have less ability if I do not do well on this test;
  • If I don’t do well on this test, others may question my ability;
  • People will look down on me if I do not perform well on this test
  • I am uncertain I have the mathematical knowledge to do well on this test
  • I can handle this test
  • I am concerned about whether I have enough mathematical ability to do well on the test
  • Taking the test could make me doubt my knowledge of math;
  • I doubt I have the mathematical ability to do well on the test

Why is it a felony? Because of two things called “priming” and the “negativity bias” (also known as the negativity effect): even when of equal intensity, things of a more negative nature (e.g. unpleasant thoughts, emotions, or social interactions; harmful/traumatic events) have a greater effect on one’s psychological state and processes than neutral or positive things do.

Take a look above – only ONE of those statements was positive.

Combine that with the “Big Five” personality traits, where women consistently report higher neuroticism, agreeableness, warmth (an extraversion facet) and openness to feelings, and men often report higher assertiveness (another extraversion facet) and openness to ideas, as assessed by the NEO-PI-R. Gender differences in personality traits are largest in prosperous, healthy and egalitarian cultures, in which women’s opportunities are more equal to those of men.

If you know that women report higher neuroticism (and this has been known for a long time), you don’t prime them with negative statements right before they are supposed to take a test!

Sadly, much of the science performed under the banner of gender studies is not really science. There are biological as well as social reasons why the women scored lower. The biological ones are a subject for a much bigger discussion, which I have no room for here (but will return to later).

The social reasons may have more to do with the culture at home than with the education system or with what “society” thinks of you. In “Culture and Achievement” (City Journal, of the Manhattan Institute for Policy Research), the author demonstrates how families shape their children’s prospects more profoundly than anything government can do.

Last but not least, let’s go back to Jess’s statement:

“This seems to support that the fact that the very existence of a stereotype puts pressure on individuals who are the subject of that stereotype, to mean that they don’t perform as well. So obviously this has connotation in all sorts of different groups, for example women and ethnic minorities when it comes to cyber security.”

There is one small ethnic minority group whose members have won 22.5% of Nobel prizes. The fact that they have won so many prizes should, by itself, have sent the whole “stereotype threat” claim to waste. They are called Ashkenazi Jews.

ICSPAC, and GDPR

Since I decided to create a place where people can share ideas, I also had to follow the GDPR, which means making sure it has “privacy by design” and “security by design”. So the site I’m creating is under development. The prototype seems very promising so far 🙂

 

© All rights reserved 2018


Uber and Under the Breach

Everything you need to know about the Uber data breach, Why Uber is the Chris Brown of the cyber economy, and much more…

[Updated 23rd Nov 2017 – see the “Cover-up?” section + afterthoughts]

Sleep

Darn, I really wanted to sleep, I really did! I had to work on something till late tonight, was already totally upset by 4pm, and when I finally finished near midnight I checked Twitter and – darn – Uber had been hacked. “What the heck, they fired Joe Sullivan, their head of information security, and Craig Clark, (the?) director of legal? Wow, I must write about it.” And of course tomorrow I need to wake up earlier than usual. Darn lucky.

But this is important.

Flashback – I think it’s 2013. I’m speaking with Alex Hutton during a BruCON break. At some point Alex tells me something that, for some reason, got engraved in my mind forever: “If you don’t know how to measure risk and communicate it to the board, you will not be a CISO for long.”

Darn right.

So here is what we know, according to Bloomberg:

What happened:

Hackers stole the personal data of 57 million customers and drivers. Compromised data from the October 2016 attack included names, email addresses and phone numbers of 50 million Uber riders. Plus, information of 7 million drivers, including some 600,000 U.S. driver’s license numbers. Uber paid $100K to get the data erased.

How it happened:

Two attackers accessed a private GitHub coding site used by Uber software engineers and then used login credentials they obtained there to access data stored on an Amazon Web Services account that handled computing tasks for the company. From there, the hackers discovered an archive of rider and driver information. Later, they emailed Uber asking for money, according to the company.
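The Bloomberg description raises an obvious question: how easy is it to find credentials sitting in a repository? Very. Below is a minimal sketch of the kind of trivial secret scan that makes committed credentials so dangerous – the patterns and the repository path are my own illustrative assumptions, not a description of the attackers’ actual tooling:

```python
# A minimal sketch of why credentials committed to a repository are so
# dangerous: anyone with read access can find them with a trivial scan.
# The patterns and repo path are illustrative assumptions, not how the
# Uber attackers necessarily operated.
import os
import re

# AWS access key IDs follow a well-known shape; secret keys are 40
# base64-like characters. Both patterns are simplified and noisy here.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "aws_secret_access_key": re.compile(r"\b[0-9A-Za-z/+=]{40}\b"),
}

def scan_repo(root: str) -> None:
    """Walk a checked-out repository and report likely secrets."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        for label, pattern in PATTERNS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: possible {label}")
            except OSError:
                continue  # unreadable file; skip it

if __name__ == "__main__":
    scan_repo("./my-checked-out-repo")  # hypothetical path
```

The sobering part: the very same scan, run by the defenders as a pre-commit check, would have flagged those credentials before they ever reached GitHub.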

Uber? No way!

This data breach is NOT Equifax redux. Uber is a totally different breed of company. It’s a market breaker, it’s cutting-edge technology, it’s DevOps, it’s container technology and microservices, it’s cloud and buckets – it’s all the things that most senior management in most companies dismiss as “buzzwords”, because they don’t understand anything about them. These are not buzzwords. These are technologies that can kill organisations, that can make board members lose their jobs, and that will almost certainly cost infosec and privacy people theirs – and not only the senior ones.

The CISO

The CSO, Joe Sullivan, previously worked at PayPal and eBay, and was head of security for Facebook, and – surprise, surprise – he knows a lot of red-team tricks, which he used throughout his time at Uber. All the privacy-violating programs Uber was running were “spearheaded” by Joe. Uber was very aggressive in its offensive infosec ops. Obviously that focus came at the expense of defensive security, which led to the data breach.

The Board

Obviously Joe didn’t pay the $100K from his own pocket; as the article clearly states, “Uber paid”. The article also states that Joe Sullivan spearheaded the response to the hack last year. Speaking as an ex-CISO (of a bank): if this is not a subject for discussion by a company board, I don’t know what a subject for discussion by a company board looks like. There is no way this did not reach the board; it must have involved the CEO, the CIO (CTO), legal, finance and operations. If not, Uber has a horrible management organisation with no real governance in place – which is obviously what it had until recently. As a reminder, just two months ago Uber agreed to 20 years of privacy audits to settle FTC charges.

Cover-up?

A Reuters report claimed the following:

A board committee had investigated the breach and concluded that neither Kalanick nor Salle Yoo, Uber’s general counsel at the time, were involved in the cover-up, another person familiar with the issue said. The person did not say when the probe took place.

That is interesting. What could have happened is that two people (according to the report) scanned Uber properties, found the GitHub repositories, used the credentials, downloaded the information, and then contacted Uber to demand a reward for their actions. I guess the CSO then had Craig Clark authorise a $100K payment to these people.

Let’s assume we believe this imaginary story. There are SO many problems with it:

First, on behalf of all the other CISOs around the world, I want to say wow – we envy a CSO who can throw away $100K. There are enough CISOs out there who don’t even have a budget of $100K.

More importantly – a CSO who can get a $100K payment made to individuals without anyone in finance contacting them to ask what the heck is going on? That’s a neon sign the size of the “Hollywood” sign, flashing red, with sirens a deaf person could hear. If this is true, Uber has no real corporate governance in place, period.

Last point – “a probe” by a board committee. So let me get this right – you ask the people most likely to be implicated by an investigation to perform the investigation, and you want us to accept the results? Thanks, but I don’t buy it. If the boot fits…

Data Privacy

Which brings us to data privacy. The article says “Uber riders around the world”. Let me guess – if I say “Voulez-vous coucher avec moi?”, there is a good chance that some of the people impacted by this hack as “riders” will be from a certain European country. Does this mean multiple notifications to multiple countries?

Lessons & questions to take home

Heads first:

First of all, it’s a reminder. It’s a reminder that what Alex Hutton said to me a few years ago is true for all of us who work in and/or manage information security or data privacy. It’s a sharp reminder that our heads can find themselves in a guillotine basket if – sorry, when – a breach occurs.

Survival is defensive: see my previous post, and scroll down to the video of Jordan Peterson. Survival in nature is based on defending the known and moving carefully along the unknown paths of life. This is why we are wired to react when we see a snake, not when our prefrontal cortex has finished deciding whether it’s a snake or a wooden stick. Life is the art of staying where you should be – not over-protective, and not over-offensive.

Smart can be your Achilles’ heel: Joe Sullivan seems to be good at what he does – the dude was on a NIST commission on enhancing national cybersecurity, advising the president! Based on what I’ve had the chance to look at, the guy is most likely smarter than most of the people who will read this post, and certainly smarter than the person writing it. If he had invested more in defence rather than in aggressive red-teaming, he might have been able to prevent this stupid data leakage from occurring. Smart does not mean wise. Which brings me to the next point…

Risk: I don’t know if Joe knows about quant risk; I guess he must. Most of the really smart people I meet know about quantitative risk management, such as FAIR. FAIR is the future, right now – the Big Four are looking into it, RSA is working with RiskLens on it – so if you don’t do quant risk, it’s time to wake up and smell the auditors. If you need to measure cyber risk, please start planning the decommissioning of your risk heat maps. They are useless for measuring cyber risk.

DevOps: I hear some of you thinking out loud, “I told you, DevOps means no security.” Not true, but also true. How did the two attackers get access to the private GitHub repository? With security in place this would not have happened; but when speed matters more than anything else, and security is busy on offence, the back is left vulnerable.

GDPR: Can any of us imagine what a data breach like this, involving EU citizens, will look like in 2019? Well, I actually can, but that is a topic for a totally different article – it’s already too late tonight.

Awareness: oh, so much to write here, but I will keep it for a new … talk, perhaps 😊

Never underestimate

Uber is a unique company. It decided to play as if regulations and laws don’t apply to it, and it has been the best and the worst in many ways. It has a huge customer base, and a huge amount of explaining to do. Some rules and regulations are important. I don’t want to live in a dystopian reality where people work as slaves but are called “independent drivers”. If there is a valid business model that does not violate the ethical and moral codes of our society, I will be happy to support it. If not, and unless it changes, I will stop using it. Never underestimate the determination of a tired information security professional…

Afterthoughts

The more I think about it, the more it feels like Uber is the Chris Brown of the cyber economy. The same way Chris kicked the living hell out of Rihanna, Uber has been molesting us, our privacy and our laws – and yet we don’t do anything about it. Will we stay in the relationship until we end up looking like this?

Sure feels like that.

OK, enough said. Let the music play…

Eh’den

© All rights reserved, 2017

#CyberBlind

Ridiculous information security salaries are the symptom of a bigger problem. Why salaries and job ads are superb indicators of your organisation’s cyber security maturity, how it can be improved, and why your organisation won’t do anything to fix it.

By Eh’den Biber

October has been an extremely hectic month for me. It’s been a while since I travelled and worked in so many countries; at one point I slept in 5 different places during a single week. Amazing and exhausting at the same time – see the post photo, which was taken along the way.

When I came back, I decided to see if I could identify any shift in the job market – to see if I could make my wife happier by finding a role that doesn’t require me to travel so much. Sadly, the results are grim.

Over the years I’ve developed a sort of mentalist skill: five minutes into a job interview I can already tell the interviewer things I shouldn’t have known, such as the fact that they recently experienced a severe breach, that they have the auditors’ blues, or simply that someone just left in a hurry.

This brings us to the question – why? How come the responsibility and accountability of a person who takes on such a role is not rewarded in the right manner?

HR

In most cases, HR has no clue about the role it is asked to recruit for, and yet it is supposed to filter candidates for the hiring manager. HR then subcontracts the hiring to a group of agencies, some of which have no clue what they are hiring for. I was recently asked by a recruitment agency manager, “What is a CISO?”. Enough said.

Take home message to hiring manager: Speak with the recruitment agencies, ask for recent references, meet them, or use the ones you trust.

Job Description

What is your role? What are you supposed to do (objectives)? What are the current KPIs you need to maintain or contribute to? These are basic elements that need to be part of a job description, yet companies sometimes publish job descriptions so unclear that you wonder how they could have any metric for success. Others publish role descriptions that make you wonder how many FTEs are expected to perform all the tasks mentioned in the ad – only to realise that there will be one FTE, and it is going to be you, if you want to join the madhouse. I once received a job description that spread over four condensed pages, and when I asked whether they had identified the short-, medium- and long-term priorities, I was told that all of the tasks were high priority, and all were required to be done by me.

Last but not least – most managerial roles don’t mention the budget you will be in charge of, and when you ask what it is (I did), you don’t get an answer. How can you estimate whether you will be able to fulfil your role if you don’t know how much budget you have?

Take home message to hiring manager: Specify a role that includes things like the role’s purpose, financial responsibility (at least to be shared at the final interview stage), direct reports, role objectives, the KPIs it needs to contribute to and deliver, qualifications, skills/knowledge, experience, etc.

Role Objectives

I mentioned role objectives above, but we must talk about them, because this is where the shit hits the fan in many cases. If you are hiring someone and you want them to succeed, the role objectives should be related to the organizational objectives. However, even when that happens (rarely), in most cases it’s a pseudo-relation, because in most organisations employee performance measurement is a joke. For most of my career I was asked to provide my yearly objectives before I had received my manager’s objectives – because he hadn’t received his from his manager. If your manager’s performance is not measured correctly, how can any measurement of you mean anything? And if you want to hire a person, how can you hire the right one if you don’t know how to measure them?

Take home message to hiring manager: If you can’t map the role objectives of the person you wish to hire to your own objectives and to the organization’s objectives, perhaps you should work on that before hiring anyone.

Infantile Risk Maturity

This leads us to a much bigger issue, which I can summarize in ten simple words:

Organizations do not know how to measure cyber security risks.

Let me break it down for you:

  • Organizations
  • Do not know
  • How to measure
  • Cyber security risks

Here’s an example – remember the role objectives? If your organization can’t associate a risk reduction with specific role objectives – or, god forbid, quantify it – how can it really know that the salary it intends to pay that person is correct?

There is a systemic, cross-industry lack of understanding of how to perform risk analysis. It’s also size-agnostic. Last year I spoke with a person holding a very senior role in an undisclosed large bank. He admitted to me that his bank only realised it didn’t have the right tools to measure cyber risk during a workshop it ran in 2015. Mind you, this happened in a huge bank, while most organisations haven’t even reached that “A-Ha” moment (thank you, Oprah).

When people don’t know how to measure risk, you can’t be surprised that they come up with silly salaries for such risk-mitigation-focused roles. If you don’t even know how much risk you have, how do you know that the salary you offer is appropriate?

Take home message to hiring manager: If you can’t quantify the risk reduction associated with the role you wish to fill, don’t be surprised that this “finger in the air” measurement method attracts the wrong types of people, and that the salary you offer is too low. Ah, and don’t trust the market average, the same way you don’t trust the advice of a ship full of fools, or ask for directions from a group of blind people.

Auditors

Most organisations have 3 lines of defence – operations, risk and audit. Your auditors are supposed to provide assurance to the management of the company that it functions the way management wishes it to. When it comes to information security, ISACA is the de-facto authority for certifying auditors, and it does not do its job correctly when it comes to risk measurement, a critical element in the security posture of an organisation. ISACA allows certified auditors to accept point estimates (AKA risk heat maps) as a valid risk measurement. Most of the current risk methodologies are not fit for purpose and should have been decommissioned long ago, yet here we are, in 2017, and risk people are still allowed to use them.

We have, in general, a systemically bad practice of risk management, even though there are alternatives. Jack Jones’s FAIR (Factor Analysis of Information Risk) was founded in 2005. It has been an open standard for years now, yet you still see risk heat maps everywhere, rather than probability distributions. A sketch of what a distribution-based estimate looks like follows below.
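To make “probability distributions rather than heat maps” concrete, here is a minimal sketch of a FAIR-style Monte Carlo estimate: annual loss as loss event frequency times loss magnitude. Every parameter below (event rate, loss median, spread) is a hypothetical placeholder I made up for illustration; a real FAIR analysis derives them from calibrated estimates:

```python
# A minimal sketch of a FAIR-style quantified risk estimate:
# annualised loss = (number of loss events) x (magnitude per event).
# All parameters are hypothetical placeholders, not calibrated data.
import numpy as np

rng = np.random.default_rng(seed=42)
trials = 100_000  # simulated "years"

# Loss event frequency: Poisson, mean 0.3 events per year (assumed).
events_per_year = rng.poisson(lam=0.3, size=trials)

def yearly_loss(n_events: int) -> float:
    """Sum n_events lognormal losses (median ~$500K, wide spread; assumed)."""
    if n_events == 0:
        return 0.0
    return rng.lognormal(mean=np.log(500_000), sigma=1.2, size=n_events).sum()

annual_losses = np.array([yearly_loss(n) for n in events_per_year])

# The output is a distribution, not a single heat-map cell:
print(f"P(any loss this year):       {np.mean(annual_losses > 0):.1%}")
print(f"Expected (mean) annual loss: ${annual_losses.mean():,.0f}")
print(f"95th-percentile annual loss: ${np.percentile(annual_losses, 95):,.0f}")
```

A chart of that distribution (a loss exceedance curve) tells the board, in money, how bad a bad year can get – something a red/amber/green square simply cannot express.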

I have many friends who are part of ISACA; I’ve even been a director once. So how come ISACA doesn’t use its power to push for a change? To understand that, I invite you to watch the following video, which explains it all. It is called “Human Motivation and Zebra Camouflage”.

 

In short – people are driven by the fear of being anxious or in pain, and by the avoidance of suffering, NOT by the drive to be happy. Change can lead to suffering; hence, people will do anything to “keep what we have” when the new is unknown. And since we have been using the same outdated methods for so long, people stick to them, and refuse to evolve until they are hit.

Take home message to hiring manager: If you want to see real change in your cyber recruitment, you must work with stakeholders to change the risk methodology of the organisation, and you should hold sessions with the auditors to see whether that is possible. If you find you can’t change the risk methodology, brace yourself for a breach that you will be blamed for.

Happy Hiring!!!

Eh’den

© All rights reserved, 2018

GDPR “Unknown Unknowns”

The art of privacy, and why what you don’t know (about the GDPR) WILL kill you.

By Eh’den Biber

 

Introduction

A few years ago, I had a colleague who was about to depart on a flight to a lovely vacation with his wife. As the airplane was waiting for the signal to take off, my colleague’s wife started to scream. I mean REALLY scream. As she had taken many flights before, my colleague had no idea what the fuck was going on (forgive my French). Long story short – the airplane went back to the terminal, my colleague and his wife were taken off it, severe sedatives were used, and instead of a lovely vacation my colleague spent the next few days in a mental institution watching his loved one go through hell. The whole thing was followed by a long recovery process, and it almost broke him to pieces as well. Continue reading

Fake News

Who are the real hackers, and why most of the news about hackers are fake (snippet from my upcoming talk)…

By Eh’den Biber

Hi everyone

As you might have seen from my previous posts, I’ve been writing a long post called “The Revolution”, which covers my journey into finding ways of communicating and connecting with my son, who has severe autism. I was about to post a new update to it – but then I stopped.

You see, for the last two years I’ve been planning to give a talk on the subject of substance abuse in the hacker community. This is a topic with HUGE implications for anyone who is a hacker, works with a colleague who is a hacker, employs one, or plans to employ one. The reason the update to “The Revolution” was delayed is that substances and their impact on non-ordinary states of consciousness were simply too big a subject for a written update.

And the good news is that, thanks to that work, I’m finally ready to give a talk on the subject. It would be lovely to share it with Peerlyst members here in London, and I will be looking for an event space for it. I also plan to present it at upcoming CONs, because it’s probably the most interesting topic I’ve researched, and one with huge implications for many of the people reading this right now. Based on my experience, if you are reading this, you either abuse substances or know a substance abuser. If you have an upcoming CON and wish me to talk on the subject, please contact me directly. I assure you it’s going to be one of the most interesting talks at your event.

Please share thoughts, comments and stories either below or, if anonymity matters to you, via my secure email account: ehden at protonmail dot com.

 

Eh’den

Fake News

There is an epidemic of “hacker news” dominating our world at an alarmingly increasing pace. It moves so fast that mentioning any specific reference here would be a mistake: it would be blown away by another data breach so quickly that the reference would most likely be forgotten.

The problem is that most of this news is fake.

Continue reading