Bright Stove

Reflecting information risk journey

Archive for the ‘Cybersecurity Collaboration’ Category

Fear when it is dark, fear when there is light


We fear the dark because we cannot see what is in it. Many of us have probably had a similar experience of walking up or down an unlighted stairwell in the middle of the night, or into a dark room or some such place. Our mind responds to the change. With a sudden surge of attention, our pupils dilate without our giving any command as we try to peer into the darkness. Our ears strain for the slightest sound in the vicinity, our nose tries to pick up any unusual smell, and any unpleasant smell suddenly seems fouler than usual. Our body also reacts to any notable temperature change, and if our fear heightens, we start to sweat, along with a series of goose bumps. What is happening is that our body is trying to collect data about the surrounding environment, and our brain is working hard to analyze and interpret that data. The less data we get, the more fear our mind generates, which is probably a way of getting us to do something – collect more data, or just act, so that we may get some (more) data about the unknowns in the dark. The "do something" differs from individual to individual. Some may simply try to escape the dark. What we would like to be able to do is to pause, calm ourselves, look for light (the flashlight on our mobile phone is pretty convenient these days), move forward slowly, feel for something to hold, or backtrack. But our legs might already have stiffened from the fear. Even then, many of us manage to calm down and take stock after some frightful moments. We give up only when our heart stops. Meanwhile, our mind keeps searching for a way out, or scares us into desirable or undesirable actions.

If you read all the dark stories and news of exploitation and attacks, you may feel that the Cyberspace is a dark place. Many users, however, don't seem to have any fear of it. That is primarily because their experience is shielded by the layer of Web user interface (web browser, mobile apps, etc.) that gives them a perception that they are in the light and in control. It basically blocks their fear sensors. What we need is to surface the known risks so that the darkness in the Cyberspace becomes visible. Besides being educated so that their body and mind sensors respond to those risks, users need to be trained to be competent to deal with the risks appropriately, that is, to practice secure computing.

Shutting users out or designating specific devices for use in the Cyberspace is unlikely to change their mind's sensors or influence their behavior towards those risks. On the surface, the overall attack surface appears to shrink as a specific channel of exposure gets shut down. Like water, however, the risk flows towards the permitted devices, especially those that do not have the level of security protection available on corporate machines. Weak links prevail. More importantly, users will find ways to overcome the restrictions in the name of getting their job done more efficiently. If an insider wants to leak information, he or she will find a way to do it as well.

What is in the dark stairwell remains dark until we shine some light on it. We bring light to counter darkness. The moment we are able to see, our fear subsides. Our other sensors also begin to stand down. However, visibility can also generate fear: when we run into fog or a sudden heavy downpour while driving on a highway, when another vehicle suddenly crosses over from the opposite side of the traffic and heads directly towards us, or when we light up the dark stairwell and immediately see a dead animal in front of us. Partial visibility can at times be worse, as our mind starts to interpret whatever it can, and our imagination may run faster than our brain can process. Such situations can cause knee-jerk reactions and may result in dire consequences. The "16 waves of Cyber attacks" mentioned in the press on June 9, 2016 have certainly generated much fear of the Cyberspace. Fear that results from visibility is unlike fear of the darkness. It calls for a different kind of response. It is not about collecting more data, but about reacting to the present (and also perceived) danger based on what has been learned. If we have to take immediate reactive action against known, visible risks too frequently, our heart will also stop beating very soon. Since these are known risks, we can prepare ourselves and be ready for them, so that we can deal with them as a "normal" response and our heart rate need not surge suddenly. Preparation has to include not just people's knowledge and competency, but also process and infrastructure (technology) readiness.

In short, visibility allows us to see and detect dangers and to gain situational awareness. Readiness enables us to contain and reduce the potential impact and damage. Stopping the fog or the heavy storm is not humanly possible. Do we choose to stop driving then? In many instances, people still drive when there is a bad weather forecast. Why? They want to live their lives and not hide from, or be stopped by, the risks of nature. As such, like many others, we will continue to face the threats of nature when they arrive, and meanwhile we get ourselves prepared so that we have a lesser chance of being hit by the danger. When we are already on the road, our readiness is what saves us at that moment. So we learn to slow down (with the brakes, the technology, ready at all times), turn on the head, tail, and parking lights so others can see us, and tune in to the weather and traffic channel if one is available (which is always on in big countries like the US). On top of these, we send the vehicle for tests and check-ups periodically to gain assurance of our level of technical readiness.

Some say that a bit of fear is good. I think so too. It gets us to take action to deal with those risks (note that risks are known potential dangers, whereas unknowns are hidden and uncertain). The challenge, however, is how to quantify "a bit of fear". When does a bit become too much? Risk management is a trade-off: we give away some convenience in return for safety or security. Inconveniences are real; they affect our daily life and consume our energy in many ways. A state of safety and security, on the other hand, is a perception, a state of mind, something that is not measurable. We feel safe, or secure, when nothing happens. Nothing happening can also be because we have not seen the problem, have been distracted by something else, or lack the capability to see it. How much we should trade off remains a challenge. We can never be "more secure", since we would not even know when we get there. Instead, we can be less insecure: by discovering or knowing the vulnerabilities, taking action to continuously eliminate or reduce their potential for exploitation, and getting ready to respond when they do get exploited, or to detect any abnormality. Vulnerabilities can be measured, though new ones may keep appearing as old ones get fixed.

A well-known depiction of risk, vulnerability, and readiness is The Great Wave, created by Katsushika Hokusai around 1830 as a woodblock print. It portrays the struggle of people whose livelihoods and property are "at risk" not just from the tsunami, but also from the volcano of Mount Fuji. It shows the social, economic, and physical vulnerability of the people, and their capacity and resilience through the design of their boats and the way they row in parallel with the wave crest. The oarsmen appear to have interwoven their oars into a lattice, perhaps to prevent them from being smashed by the giant wave. That is being ready. Hokusai's great work of art is a reminder of the awareness of such hazards in Japan, as well as of the way in which households, groups, and societies cope with and adapt to such threats to their everyday lives and livelihoods (Wisner, et al., 2004).

Perhaps we need a version of The Great Wave that depicts the Cybersecurity challenges, to bring about greater awareness of the Cyberspace risks and promote a culture of capacity and readiness against the ever-changing vulnerabilities.

Reference

Today (newspaper), June 9, 2016, "Singapore hit by 16 waves of attacks since April last year".

Wisner, B., et al. (2004). At risk – Natural hazards, people’s vulnerability and disasters, Routledge.


Written by mengchow

July 28, 2016 at 4:05 pm

When our guard is down


We don't normally feel the reality of a criminal attack on the Internet (or a so-called Cybercrime attack in the Cyberspace, as it is known these days) until someone we know, especially a friend or a relative, actually becomes a victim of such an incident. An accident on the road, by contrast, is something we actually see. Our emotional state changes at that point, and we are likely to become more cautious, at least momentarily; this heightened state of vigilance stays with us for a short period, until the image of the accident has been put behind us. Then nothing happens, and we let our guard down. Life goes on.

Risk in the online world (aka the Cyberspace) is so opaque that even after learning about an incident that is still ongoing, when we go online everything in front of us (in our own cyber landscape) still looks normal. The scene of the incident is not just virtual; it also changes dynamically. If the victim is an end user, the affected device is likely her home computer, tablet, or smartphone, which does not even have a web front-end, and there are no network logs available to analyze, unlike what organizations have in most cases. Unless we are physically at the same location as the victim, we have to imagine what the scene looks like. It is simply not as observable. So our guard remains down.

One common thing about Cybersecurity incidents is that by the time we hear or read about one, it has likely already happened. Otherwise, we may not even find out, especially as end users. By performing an online search using keywords related to the problem, as a third person, we then learn about the danger to which we were lucky not to have been exposed, or perhaps were exposed to without knowing it, and we can now learn how to find out whether we were truly lucky or just ignorant. I guess that is one of the benefits of having the Internet.

An old friend called last night. A few hours before, he had received a call from someone claiming to be from Microsoft technical support, who informed him that his machine had been found infected with malware and volunteered to help him solve it. But before they could help him, he had to renew his technical support contract, which would cost S$399. Driven by fear of the unknown malware, and by the urgency of the caller's tone, he complied with the caller's advice and proceeded to make the payment online. He then allowed the caller to take remote control of his machine, and the caller started installing stuff onto it. After the person hung up the phone, while the remote installation continued, he started to think about what had just happened and decided to call me to check whether Microsoft would do such a thing. Unfortunately, he had just fallen for a tech support scam 😦 Microsoft has published quite substantially about this scam at: https://www.microsoft.com/security/online-privacy/avoid-phone-scams.aspx.

As I reflect on this incident, a question emerges: what if I received such a call myself? Is there a chance that I would get scammed as well? I think there is always a possibility, since I am also a human being and can react emotionally or impulsively, depending on how the caller manages the conversation. Furthermore, even as an information security worker, it is impossible for me to know every single way a scammer may work. Today they may use tech support, tomorrow another service, and the next day something else that gets me to respond the way they want. There are just too many ways to break something or someone, and it is often not too difficult to do. Social engineering is already a mature craft in itself, as Robert B. Cialdini has shown in his book "Influence", and Kevin D. Mitnick in "The Art of Deception".

When asked how to stay safe online, the short answer is often "be vigilant." Unfortunately, it is impossible to be vigilant all the time. It would be highly stressful, and the effects on our health might even be worse than suffering an online scam. In reality, our guard is often down. We react to situations as they develop. What is worse is that we also have a tendency to develop and use automation in our brain, to take shortcuts and react quickly. The default mode is often to react automatically, a survival instinct, especially when triggered under pressure, as Robert B. Cialdini describes from his research and experience in "Influence".

In the organizational context, readiness drills and exercises can help heighten users' awareness, build up the technical infrastructure, and enhance individuals' competencies to enable faster detection of and better responses to security attacks. (For example, see "Responsive Security in Action" in my blog series on Responsive Security.) Many organizations have started doing this in the past few years. The security industry (for the enterprise market in particular) has, in general, been developing more products and services in recent years to facilitate higher security readiness as well. But for consumers at large, people who do not work for big organizations, how do we get them ready to be safe and secure? I think this is a much more challenging area. Over the years, I have thought about a few ideas, but these are just snippets of tactics, not a complete solution.

For example, could there be virtual security signposts and posters (in the form of warnings and alerts, or "watchful eyes", instead of just advertisements) in the online environments where we browse and roam around regularly? How should the web architecture of the Internet evolve to facilitate security needs? Who should own the outcomes, which dictate the content and the delivery?

Who should plan, organize, and fund Cyber readiness drills and exercises for citizens who are online as well? How would one tell whether a drill is real or yet another scam? There are no simple answers to these questions, unfortunately.

What I have also realized through a number of incidents involving friends thus far is that money is the common denominator. That is what most scammers are after (unless you are someone who has more to offer than money). If someone asks for money to be transferred, stop, take a deep breath, and think about it again: must I make this payment, and must it be now? This approach is similar to what Cialdini advises in "Influence" on how not to be scammed into buying things we don't need, i.e., turn off the automated reaction mode. Pause, think, then act. It may not be foolproof, since we are human and taking shortcuts is in our DNA. But if we can remember to slow down under stressful or questionable situations, it will very likely stop the incident from progressing into a full-blown one. Nevertheless, something not happening is not an observable outcome. Bear in mind that the attacker may also take less aggressive steps initially in order to gain our trust and collect more information about us and our friends and family before executing her true mission. Question why we should trust this person (especially someone we have never met before) before proceeding.

Finally, if you are a Microsoft user, do take note of how to contact their official support: https://www.microsoft.com/en-sg/contact.aspx. Perhaps copy the contact information into your address book so it is always handy. For Apple users, I could not find a local contact number for Apple support, only their general support site: http://www.apple.com/sg/support/contact/, which could still be useful.

Best wishes and a happy new year!

Written by mengchow

January 6, 2016 at 11:19 am

Responsive Security – Be Ready to Be Secure


After much anticipation, my new book, "Responsive Security – Be Ready to Be Secure", is finally published today. Thanks to Prof Pauline Reich of Waseda University and Chuan Wei Hoo, who helped to proofread the earlier drafts, to my publisher, Ruijun He, my editor, Iris Fahrer, and to the many friends and family members for all the support and assistance rendered throughout the long process of making this possible.


The book is based on my thesis on a Piezoelectric Approach to Information Security Risk Management, which captures the past decade of my experience and learning from my practice and from fellow practitioners with whom I have had the opportunity to work. The book walks through our current knowledge and principles of practice in information security risk management, with discourses on the underlying issues and dilemmas in a constantly changing risk environment. It introduces the concept of responsiveness and highlights the importance of readiness and preparedness in the face of changes that we may not always be able to anticipate, let alone predict. Responsive Security focuses on events that could lead to systems failures, rather than on the current industry focus of searching for vulnerabilities and learning how perpetrators exploit and attack.

If you are interested in finding out more about the Responsive Security concepts and approach, the book is now available from CRC Press (http://www.crcpress.com/product/isbn/9781466584303) and also on Amazon, where an e-book version has been published as well.

12th RAISE Forum Meeting at Jinan, Shandong


Having talked about Shandong in the previous blog ("Before the ashes turn cold") yesterday, I have in fact just come back from our 12th RAISE Forum meeting, which was held in Jinan, the capital city of Shandong province, China, on March 27 and 28, 2013. The meeting was jointly organized by Beijing Powertime (北京时代新威) and Timesure, supported by the Association of China Information Security Industry (ACISI), and co-sponsored by (ISC)2.


Unlike previous gatherings, the 12th meeting started with a half-day public seminar attended by about 150 professionals, mainly from Shandong and a number of other cities in China. The keynotes of the seminar were given by Mr Wu Yafei, Chair of the ACISI (who is also Executive Director of the Information Security department of the State Information Center, SIC), and Professor Lv Shuwang (the inventor of the SMS4 block cipher algorithm).


Prof Lv spoke about the nature of the Internet versus an internet, and the importance of knowledge security. According to Prof Lv, knowledge security is a natural progression from information security as we evolve from an information-based economy to a knowledge-based economy. Knowledge security is critical not just to organizations or individuals, but also to preserving the massive body of knowledge of a nation's civilization and cultural heritage. Knowledge security requires a secure Cyberspace, one that operates on a network whose growth, reliability, maintenance, and security are accorded national-level coordination and protection, since preserving the knowledge of a nation's culture and civilization is a national issue. Today's Internet, however, is rooted in the US and is not a true inter-network in which a nation's public (or citizens') network connects mutually with the public networks of the US or other nations. To have a truly inter-networked network, China needs to have its own public network to begin with. Currently, China's public Internet (like many other countries' public Internet) shares a portion of the global Internet, "like a tenant on a rental property", says Prof Lv. As such, security problems on the Internet continue to proliferate and cannot be resolved effectively. This is not an ideal condition for China's knowledge security. Prof Lv therefore asserts that "China doesn't have Internet". Nevertheless, expecting the global Internet to have its root removed and be made completely open is also impractical, Prof Lv concluded.

At the public seminar, Mr Ning Jiajun, retired Chief Engineer of the SIC, also shared his thoughts on the information security issues and challenges in China, and discussed the need for a basic Information Security Law, or Ordinance. Such a law is necessary to address the fundamental legal principles and basic system requirements that would support more comprehensive, specialized information security laws for the security governance of each industry sector.

In the professional certification arena, Mr Wang Xinjie of Beijing Powertime shared the status of the new work item on Information Security (IS) Professional Certification in ISO, which is still in an extended Study Period (now totaling 12 months); the status of CISSP adoption in China (more than 600 certified professionals as of March 2013); and the development of a new Certified Information Security Auditor (CISP-Auditor) qualification in China. The idea of the Information Security Auditor is to develop a community of professionals skilled at auditing (or validating) the information security practices of organizations. The practice may be based on the ISO/IEC 27001 ISMS standard, other approaches adopted by the organization, or requirements mandated by specific industry regulations.

In addition to the presentations by the experts from China, representatives of RAISE Forum members also spoke at the public seminar. Mr Koji Nakao presented the status of security standardization in ISO/IEC JTC 1/SC 27 and ITU-T SG 17, including the current work plan and the areas of focus in the near term. Prof Heung Youl Youm of Soonchunhyang University, South Korea, presented the status of Personal Information Management Systems (PIMS) standardization in ISO/IEC JTC 1/SC 27 and within Korea itself. I shared my thoughts on the Responsive Security approach to information security risk management (which I shall perhaps discuss in future blogs).


The closed-door meeting of the RAISE Forum continued in the afternoon and for the whole of the next day at the Institute of Information and Communications Research (CIIIC). Present in person were members from Japan, Singapore, South Korea, P.R. China, and Thailand, as well as a representative of (ISC)2, while the representatives from Malaysia and Chinese Taipei joined the discussions and presentations online via the WebEx teleconferencing facility.

Besides the usual updates on ISO/IEC JTC 1/SC 27 and ITU-T SG 17 standards development activities, the meeting also discussed some recent Cybersecurity developments, such as President Obama's Executive Order, Japan's Cybersecurity strategy development, the very recent South Korea Cyber attack incident, and Thailand's Cyber fraud incidents involving the security of smartphone applications. The international standardization activities of interest include the revision of the ISO/IEC 27001 and 27002 standards (both currently at DIS stage and likely to be published before the end of this year), cloud security standards (including ISO/IEC 27017 and 27036, and the new work item in WG 4 on the technology aspects), and PIMS-related standards efforts. There was also much deliberation on the scope of a RAISE Forum project on an "Information Security Audit Framework", which is currently under development. The results of the (ISC)2 2013 Workforce Study report and of the recent RAISE Forum-initiated Information Security Management Practice survey were also discussed. The latter will be shared in a separate update in a few weeks.

The meeting closed with thanks to the organizers and sponsors, and a short discussion on the 13th RAISE Forum meeting. This year is in fact the 10th year of the RAISE Forum since its inauguration in Nov 2004. The 13th meeting is planned to be held before year-end, at a venue to be confirmed, and will double as a 10th anniversary celebration event.

11th RAISE Forum Meeting


Last week in Tokyo, members of the RAISE Forum gathered for the 11th meeting since its inauguration in November 2004. In the past two to three years, activities and participation in the Forum meetings seemed to have slowed down, but core members from Japan, South Korea, Chinese Taipei, Malaysia, and Singapore continued to be active in organising and facilitating the proceedings, focusing mainly on information sharing and keeping each other updated on their respective economies' developments (in terms of information security and standards). Malaysia, one of the founding members, also continued to contribute through remote participation (thanks to the WebEx conferencing tool) even though they could not get the funding to attend the meeting physically.

In this meeting, there were two interesting developments. Our mainland China members sent four representatives and provided two contributions to the proceedings, expanding members' presence at the meeting and increasing the level of activity in the forum. At the close of the meeting, we also agreed on two new initiatives to pursue. As this is still a semi-open forum, I shall not discuss the proposed new work items in more detail until we have something more concrete to share. Meanwhile, if anyone in Asia is interested in participating and contributing (not just observing and listening ;-)) to improve the sharing of information security learning and experience, feel free to drop a comment here, send us a direct message on Twitter @raiseforum, or reach us via the RAISE Forum group on LinkedIn.

Special thanks to Japan's NICT for sponsoring the meeting, and to our Japanese members for organising the logistics and administrative support, including the reception gathering, all of which made the meeting possible and successfully held for the 11th time. Our next meeting will be held in mainland China, organised by our P.R. China members.

Stay in touch!

Written by mengchow

August 19, 2012 at 6:40 am
