Bright Stove

Reflections on an information risk journey

Applying Baseline Technical Measures for Managing Data Privacy IN the Cloud at Scale


As a follow-up to the earlier paper, “Baseline Technical Measures for Data Privacy IN the Cloud”, I’m glad to present the second paper in this data privacy in the cloud series. It focuses on applying the principle-based methodology, together with the outputs of the earlier paper, to validate the baseline measures against the newly published Indonesia personal data protection law. The paper is now available at the Asia Cloud Computing Association (ACCA) website. I would like to acknowledge the collaborative support rendered by Ivy Young and Augustine Tobing of Amazon Web Services in helping to validate the analysis and results discussed in the paper, and ACCA for the review and publication. Below is the abstract of the paper.


Abstract

In our previous work [1], we discussed several limitations in current data privacy management standards and guidelines. Those limitations affect the design and implementation of cloud-based applications to ensure data privacy. To address them, we introduced a principle-based methodology (PBM) that derives 31 technical measures applicable for achieving the shared objectives of 19 common privacy principles drawn from two privacy frameworks and three privacy laws from Asia Pacific and Europe [2-6]. The 19 principles are grouped into five categories, reflecting their broader, shared goals.

In this paper, we test the applicability of our principle-based methodology and its three primary outputs[1] beyond those privacy laws and frameworks previously discussed. We focus on Indonesia’s recently enacted Personal Data Protection Law (ID PDPL) [7]. As we aim to help systems designers, architects, and data privacy compliance stakeholders use our approach effectively, we offer guidance on using the methodology to confirm the baseline technical measures and pinpoint any additional measures that may be needed for cloud data privacy. This helps organizations adapt to current, new, or future industry-specific and national regulations in the various global markets where they operate.


[1] The three primary outputs from Kang, M.-C., Chi, C.-H., and Lam, K.-Y., “Baseline Technical Measures for Data Privacy IN the Cloud”, Thought Leadership, Asia Cloud Computing Association, 2023, https://asiacloudcomputing.org/research/resources/, are (1) the list of 19 common privacy principles, (2) the five categories of shared objectives of the 19 privacy principles, and (3) the 31 baseline technical measures.


Written by mengchow

August 19, 2023 at 12:04 pm


Baseline Technical Measures for Data Privacy in the Cloud


After several months of collaborative work with Prof Kwok-Yan Lam and Dr Chi-Hung Chi at NTU, and several data privacy and security practitioners in the industry, including Ivy Young of Amazon Web Services, Sam Goh of Data-X, Dr Zhan Wang of Envision Digital, Dr Prinya Hom-anek of TMBThanachart Bank, Dr Hing-Yan Lee and Daniele Catteddu of Cloud Security Alliance, and members of the Asia Cloud Computing Association (ACCA), I’m glad to share that the paper consolidating our findings on common privacy principles and proposed technical measures for establishing a baseline for data privacy in the cloud is now published by the ACCA. Below are the abstract and purpose of the paper. You may also watch a presentation of an earlier draft of this paper at the HK/Macau CSA Summit (2022).


Abstract

As the digital economy grows, individuals’ personal data is increasingly being collected, used, and disclosed, be it through online social media, e-commerce, or e-government transactions. As the volume of such personal data online increases, data breaches have become more prevalent, and consumers have increased their demands for stronger privacy protection. Data privacy legislation is not new: frameworks by the Organisation for Economic Co-operation and Development (OECD), the European Union General Data Protection Regulation (EU GDPR)[1], and the Asia-Pacific Economic Cooperation (APEC) have existed since as early as 1980. We have seen more policy developments introduced recently across the globe. In ASEAN, the Singapore Personal Data Protection Act (SG PDPA) was enacted in 2012, and the latest is the Thailand Personal Data Protection Act (TH PDPA), which came into force on 1 June 2022.

Against the backdrop of these legal frameworks, businesses and governments are also leveraging advanced cloud services, such as AI/ML and big data analytics, to innovate and offer better customer services. As a result, more personal data is being migrated to the cloud, increasing the need for technical measures that enable privacy by design and by default and move beyond mere compliance with legal provisions. While new standards and frameworks have emerged to guide the management of privacy risk in the use of the cloud, they are limited in the implementation guidance for technical controls that they provide to cloud-using organizations. This paper thus seeks to fill the gap and provide technical measures that organizations may adopt to develop technical baselines that simplify their regulatory compliance across legal frameworks. We review the principles from the OECD, APEC, the EU GDPR, the SG PDPA, and the newly enforced TH PDPA, and identify a set of 31 technical measures that cloud-using organizations may use to achieve common data protection objectives, including fulfilling data subject rights, minimizing personal data exposure, preparing to respond to and recover from data privacy breaches, ensuring security and quality of personal data, and providing transparency and assurance. Our elaboration of the technical measures for achieving these data protection objectives includes references to common cloud-enabled and cloud-native services from major CSPs to provide implementation guidance.

Purpose

The motivation for this paper is to address organizations’ needs for privacy compliance and fill the gaps in existing standards. We adopt a principle-based methodology to identify a set of baseline technical measures suitable for achieving the data protection objectives underlying the privacy principles reviewed. The paper further provides guidance to cloud-using organizations on how cloud-native and cloud-enabled services may be used to implement the baseline technical controls, with reference to capabilities available from major Cloud Service Providers (CSPs), including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.


[1] The GDPR repealed and replaced the Data Protection Directive, which was enacted in October 1995.

Written by mengchow

February 8, 2023 at 9:31 pm

On risk, uncertainty, and impact


Risk management is an approach commonly used across many industries. However, the language of risk has not been consistent or easy to understand across the existing risk literature. In particular, the definition of risk is at times mixed up with uncertainty (e.g., in ISO 31000 and ISO Guide 73), and at times described in terms of the value of the asset involved (e.g., in ISO Guides 51 and 63). This has not helped in evaluating risks and making risk-informed decisions. This blog is an attempt to clarify these terms and provide a better understanding of them.

Risk versus Uncertainty

Risk and uncertainty are two separate concepts. Risk is neither a subset nor a branch of uncertainty. As Frank Knight pronounced in his classic work “Risk, Uncertainty and Profit”, “If you don’t know for sure what will happen, but you know the odds, that’s risk, and if you don’t even know the odds, that’s uncertainty” (Knight, 2006). This delineation of risk and uncertainty is fundamental and important.

Risk is tied to the possibility of loss, like gambling. Uncertainty, on the other hand, is merely the unknown; loss is not always involved. Yet, uncertainty makes us more uneasy than when we face a situation that has known risks. … The uncertainty we face in the dark has no real risk, just perceived risk, because we do not know, for sure, what’s out there. We desire an order, or perfect knowledge, that comes only when we turn on the lights. In the dark, there is no order (Bernstein, 1999).

As noted above, uncertainty is not necessarily a bad thing. Recognizing uncertainty is part of the decision-making process.

We experience true uncertainty when we do not know the probabilities of the possible outcomes because we do not even know what all of the possible outcomes are. By understanding how truly ignorant we are, we will be able to make better decisions, even as we continue to make mistakes (Peter Edgar in Bernstein, 1999, p. x).

On Risk 

Quantitatively, risk is normally expressed in terms of the probability of occurrence of the threats involved. This probability is influenced by the presence of vulnerabilities and the ease with which they can be exploited.

The constituent parts of a risk are therefore the threats and vulnerabilities associated with the system being evaluated. In other words, risk is a function of threats and vulnerabilities, or r = f(t, v) (McChrystal & Butrico, 2021, pp. 10-11; Stewart, Chapple, & Gibson, 2015, p. 62). Risk exists when there is a threat in or around the system, and the system has a vulnerability (or weakness) that can be exploited by the threat. When the exploitation takes place, the risk is realized, or materialized. If a threat has a high probability of occurrence, but the associated vulnerability cannot be easily exploited, then the risk is effectively low. The converse is also true. We can also say that if there is no threat, i.e., r = f(0, v) = 0, there is no risk. Similarly, when there is no vulnerability, i.e., r = f(t, 0) = 0, there is also no risk. There will, however, always be threats (whether introduced by humans, technology, or nature, e.g., hurricanes or earthquakes), and weaknesses will always exist in any given system.
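
To make the relationship concrete, here is a minimal sketch in Python. The 0-to-1 scoring scale and the multiplicative form of f are my illustrative assumptions, not from the cited texts:

```python
def risk(threat_likelihood: float, exploitability: float) -> float:
    """Risk as a function of threat and vulnerability, r = f(t, v).

    Both inputs are hypothetical scores on a 0.0-1.0 scale; the
    multiplicative form is just one possible choice of f.
    """
    return threat_likelihood * exploitability

# A likely threat paired with a hard-to-exploit vulnerability is low risk:
print(risk(0.9, 0.1))  # ~0.09
# No vulnerability means no risk, however active the threat: r = f(t, 0) = 0
print(risk(0.9, 0.0))  # 0.0
```

Any monotonic combination of the two factors would preserve the key property argued above: the risk is zero whenever either factor is zero.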

In the context of digital information systems, when managing information security risks, we use threat hunting, threat intelligence, and threat detection systems to help identify, detect, and measure threats that are operating or emerging in and around our systems, and we use tools such as vulnerability scanning, penetration testing, and patch management systems to identify, detect, and patch our systems’ vulnerabilities. By detecting threats early and being prepared for their occurrence, we can isolate, block, and/or contain the blast radius (or scope of effect) of a threat, thereby reducing the risk to our system. Similarly, by keeping our systems updated to the latest available patch, or implementing workarounds to reduce the exploitability of a vulnerability, we reduce our risk exposure.

Note that there may also be other factors that influence risk. One example is time, as in Winn Schwartau’s “Time-Based Security” (Schwartau, 2001). In the equity market, the timeliness of information plays a significant role in preventing frauds such as insider trading and in ensuring fair market practices. In such a situation, the risk of unauthorized disclosure of information that can influence a company’s stock price changes with time. Before the information goes public (e.g., the announcement of a strategic acquisition or merger), the risk of unauthorized disclosure will be closely watched. But once the information has been released to the public, it is no longer confidential; its integrity, however, remains important, and the risk of unauthorized modification will continue to be a key focus.
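
As a small illustrative sketch (the function and its inputs are hypothetical, only to make the time dependence concrete), the dominant risk concern for price-sensitive information can be modelled as switching at the moment of public release:

```python
from datetime import datetime

# Hypothetical sketch: before release, unauthorized disclosure
# (confidentiality) dominates; after release, unauthorized
# modification (integrity) remains the focus.
def risk_focus(now: datetime, public_release: datetime) -> str:
    return "confidentiality" if now < public_release else "integrity"

release = datetime(2021, 12, 1, 9, 0)
print(risk_focus(datetime(2021, 11, 30, 17, 0), release))  # confidentiality
print(risk_focus(datetime(2021, 12, 1, 10, 0), release))   # integrity
```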

Risk versus Impact (or Consequences)

The probability of a risk materializing, as computed from the risk equation (where r = f(t, v)), should not be confused with the potential magnitude of impact that the risk may cause. For example, a statement such as “the risk is high as it can cause significant financial losses to our organization” is misleading. Does the risk have a high probability of occurrence because of the financial value involved, or because there is a threat that is likely to materialize due to certain vulnerabilities that can be easily exploited? We need clarity in order to manage the risk effectively. The magnitude of a risk materializing, also known as the potential impact, is the outcome or consequence; it is not inherent in the risk itself. Impact relates to the value of the system as an asset, not to the risk per se. Impact assessment is therefore a separate tool used in risk management, not in risk analysis. We should not rate a risk as high simply because the value of the asset involved is high.

By separating impact from the risk measurement, we can base our risk management decisions on the significance of the risk and on the value of the assets independently. We can weigh which is more important in a given context, and whether to focus on the value of the asset or on the risk specifically. Considering a two-level high-low risk rating system, we will have four situations:

  1. Low risk, high impact
  2. Low risk, low impact
  3. High risk, low impact 
  4. High risk, high impact

It is clear from this breakdown that our top priority for managing risk will be systems that fall into situation #4 (high risk, high impact). Our next priority, i.e., either #1 or #3, will depend on whether we consider that the principle of “security is only as strong as the weakest link” should weigh more than the value of the specific asset involved. In a highly connected environment where low- and high-value assets are interconnected and depend on each other, a high-risk issue, even when found on a low-value asset, may still end up impacting a high-value asset through their dependency and/or connectivity. In such a case, situation #3 will take precedence over situation #1. Alternatively, we may isolate the systems in #3 and address the risk issue in situation #1 with a higher priority. In either case, addressing situation #1 will still be desirable to prevent or reduce the effect of a Black Swan event (Taleb, 2007) should the low risk materialize on a high-value system. We should also continue to monitor and re-evaluate systems in situation #2 to make sure that they neither become the “weakest link” nor a Black Swan.
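
A minimal sketch of one such prioritization policy follows. It assumes the “weakest link” weighting, so situation #3 ranks ahead of #1; the system names and the two-level ratings are hypothetical:

```python
from typing import NamedTuple

class System(NamedTuple):
    name: str
    high_risk: bool
    high_impact: bool

def priority(s: System) -> int:
    # Lower number = handle earlier.
    if s.high_risk and s.high_impact:
        return 1  # situation #4: top priority
    if s.high_risk:
        return 2  # situation #3: "weakest link" weighting in a connected estate
    if s.high_impact:
        return 3  # situation #1: guard against a Black Swan on a high-value asset
    return 4      # situation #2: keep monitoring and re-evaluating

systems = [
    System("payroll", high_risk=False, high_impact=True),
    System("kiosk", high_risk=True, high_impact=False),
    System("core-banking", high_risk=True, high_impact=True),
]
for s in sorted(systems, key=priority):
    print(s.name)  # core-banking, kiosk, payroll
```

The choice in the second branch is context-dependent, as argued above; an organization that can isolate its high-risk, low-value systems might swap priorities 2 and 3.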

References

Bernstein, P. L. (1999). Patterns in the Dark: Understanding Risk and Financial Crisis with Complexity Theory. John Wiley & Sons.

Knight, F. (2006). Risk, Uncertainty and Profit. New York: Dover Publications.

McChrystal, S., & Butrico, A. (2021). Risk: A User’s Guide. Penguin Business.

Schwartau, W. (2001). Time Based Security: Measuring Security and Defensive Strategies in a Networked Environment (Revised ed.). Interpact Press.

Stewart, J. M., Chapple, M., & Gibson, D. (2015). (ISC)2 Certified Information Systems Security Professional (CISSP) Official Study Guide (7th ed.). John Wiley & Sons.

Taleb, N. N. (2007). The Black Swan. Penguin Books.

Written by mengchow

December 1, 2021 at 10:01 pm

《响应式安全:构建企业信息安全体系》 (Responsive Security: Building an Enterprise Information Security System)


The Chinese translation of Responsive Security, a project started more than three years ago with the China Electronics Press and Professor Haixin Duan of Tsinghua University, was finally completed and published a few weeks ago, and is now available on Amazon China and other online bookstores. The Chinese title, 《响应式安全:构建企业信息安全体系》 (Responsive Security: Building an Enterprise Information Security System), differs slightly from the English one, mainly so that readers searching by keywords can find the book more easily. Otherwise, a more faithful title would have been 《响应式安全:有备无患》 (roughly, Responsive Security: Preparedness Averts Peril).

Special thanks to editors Liu Jiao and Zheng Liujie of the China Electronics Press, Rui Jun of Taylor & Francis, and Professor Haixin Duan and Dr Wang Yongke of Tsinghua University for their behind-the-scenes work and support!


Written by mengchow

May 27, 2018 at 5:55 pm

Speaking on cloud security in Taipei


Written by mengchow

May 25, 2017 at 5:40 pm


Adam Grant on being original


Written by mengchow

February 20, 2017 at 4:15 pm


Fear when it is dark, fear when there is light


We fear the dark because we can’t see what is in it. Many of us have probably had a similar experience of walking up or down an unlit stairwell in the middle of the night, or into a dark room. Our minds respond to the change. With a sudden surge of attention, our pupils dilate without us giving any command as we try to look into the darkness. Our ears listen for the slightest sound in the vicinity, our nose tries to sense any unusual smell, and any unpleasant smell suddenly seems more foul than usual. Our body also reacts to any notable temperature change, and if our fear heightens, we start to sweat, along with a series of goose bumps. What happens is that our body is trying to collect data about the surrounding environment, and our brain is working hard to analyze and interpret those data. The less data we get, the more fear our mind generates, which is probably a way to get us to do something: collect more data, or just do something through which we may get some (more) data from the unknowns in the dark. The “do something” can be a different action for different individuals. Some may just try to escape the dark. What we would like to be able to do is pause, calm ourselves, look for light (the flashlight on our mobile phone is pretty convenient these days), move forward slowly, feel for something to hold, or backtrack. But our legs might already have stiffened from the fear. Even then, many of us try to calm down and take stock after some frightful moments. We give up only when our heart stops. Meanwhile, our mind continues to search for a way out, or scares us into desirable or undesirable actions.

If you read all the dark stories and news of exploitation and attacks, you may feel that the Cyberspace is a dark place. Many users, however, don’t seem to have any fear of it. That is primarily because their experience is often shielded by the layer of web user interface (web browser, mobile apps, etc.) that gives them a perception that they are in the light and in control, basically blocking their fear sensors. What we need is to surface the known risks so that the darkness in the Cyberspace becomes visible. Besides being educated so that their body/mind sensors respond to those risks, users need to be trained to be competent in dealing with the risks appropriately; in other words, to practice secure computing.

Shutting users out, or designating specific devices for use in the Cyberspace, is unlikely to change their mind sensors or influence their behaviors against those risks. On the surface, it will seem that the overall attack surface has been reduced as a specific channel of exposure gets shut down. Like water, however, the risk will flow towards the permitted devices, especially those that do not have the level of security protection available on corporate machines. Weak links prevail. More importantly, users will find ways to overcome the restrictions in the name of getting their job done more efficiently. If an insider wants to leak information, he/she will find ways to do it as well.

What’s in the dark stairwell remains dark until we get some light on it. We bring light to counter darkness. The moment we are able to see, our fear subsides. Our other sensors also begin to stand down. However, visibility can also generate fear, like when we encounter a fog or sudden heavy downpour while driving on a highway, or when another vehicle suddenly crosses over from the opposite side of the traffic and heads directly towards us, or when we light up the dark stairwell and immediately see a dead animal in front of us. Partial visibility at times can be worst as our mind starts to interpret whatever it can and may have our imagination running faster than our brain can process. Such situations can cause knee jerk reactions and may result in dire consequences. The “16 waves of Cyber attacks” mentioned in the press on June 9, 2016 have certainly generated much fear of the Cyberspace. Such fear that results from visibility is unlike those of the darkness. It calls for a different kind of response. It is not about collecting more data, but reacting to the present (and also perceived) danger based on what have been learned. If we have to frequently take immediate reactive actions against known visible risks, our heart will also stop beating very soon. Since these are known risks, we can get ourselves prepared and be ready for them so that we can deal with them as “normal” response, and our heart rate needs not surge suddenly. Preparation will have to include not just people knowledge and competency, but also process and infrastructure (technology) readiness.

In short, visibility allows us to see and detect dangers, and to gain situational awareness. Readiness enables us to contain and reduce the potential impact or damage. Stopping the fog or the heavy storms is not humanly possible. Do we choose to stop driving then? In many instances, people still drive when there is a bad weather forecast. Why? They want to live their lives and not hide from, or be stopped by, the risks of nature. As such, like many others, we will continue to face the threats of nature when they arrive, and meanwhile we get ourselves prepared so that we stand a lesser chance of being impacted by the danger. When we are already on the road, our readiness will save us at that moment. So we learn to slow down (with brakes, the technology, ready at all times), turn on the head/tail/parking lights so others can see us, and tune in to the weather/traffic channel if available (which is always on in big countries like the US). On top of these, we go for vehicle tests and check-ups periodically to gain assurance of our level of technical readiness.

Some say that a bit of fear is good. I think so too. It gets us to take action to deal with those risks (note that risks are known potential dangers, whereas unknowns are hidden and uncertain). The challenge, however, is how to quantify “a bit of fear”. When does a bit become too much? Risk management is a trade-off: we give away some convenience in return for safety or security. Inconveniences are real, affect our daily life, and consume our energy in many ways. However, a state of safety and security is a perception, a state of mind, something that is not measurable. We feel safe, or secure, when nothing happens. Nothing happening can also be because we have not seen the problem, are distracted by something else, or lack the capability to see it. How much we should trade off remains a challenge. We can never be “more secure”, since we don’t even know when we get there. Instead, we can be less insecure, by discovering or knowing the vulnerabilities, taking actions to continuously eliminate or reduce their potential for exploitation, and getting ready to respond when they do get exploited, or when we detect any abnormality. Vulnerabilities can be measured, though new ones may keep appearing as old ones get fixed.

A well-known depiction of risk, vulnerability, and readiness is The Great Wave, created by Katsushika Hokusai in 1830 as a woodblock print. It portrays the struggle of people whose livelihoods and property are “at risk” not just from the tsunami, but also from the volcano of Mount Fuji. It shows the social, economic, and physical vulnerability of the people, and their capacity and resilience, through the design of their boats and the way they oar in parallel with the wave crest. The oarsmen appear to have interwoven their oars into a lattice, perhaps to prevent them being smashed by the giant wave. That’s being ready. Hokusai’s great work of art is a reminder of the awareness of such hazards in Japan, as well as of the way in which households, groups, and societies cope with and adapt to such threats to their everyday lives and livelihoods (Wisner, et al., 2004).

Perhaps we need a version of The Great Wave that depicts the Cybersecurity challenges, brings about greater awareness of Cyberspace risks, and promotes a culture of capacity and readiness against the ever-changing vulnerabilities.

Reference

TODAY, June 9, 2016, “Singapore hit by 16 waves of attacks since April last year”.

Wisner, B., et al. (2004). At Risk: Natural Hazards, People’s Vulnerability and Disasters. Routledge.

Written by mengchow

July 28, 2016 at 4:05 pm

Brief thought on IoT security 


There will be things that are security-capable, things that are not, and things that are somewhere in between. What those things can do, and how much an application can trust a given thing, should therefore be tiered based on the security capabilities the thing has and what it is willing to do in a given context.
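
As a minimal sketch of this idea (the tier names, capability labels, and thresholds are all hypothetical, not from any standard):

```python
from enum import IntEnum

class TrustTier(IntEnum):
    UNTRUSTED = 0  # no usable security capability: accept telemetry only
    BASIC = 1      # can (and will) authenticate itself
    TRUSTED = 2    # can also encrypt traffic and attest its firmware

def tier_of(capable: set[str], willing: set[str]) -> TrustTier:
    # A thing is trusted only for what it is both capable of and
    # willing to do in the current context.
    usable = capable & willing
    if {"auth", "encryption", "attestation"} <= usable:
        return TrustTier.TRUSTED
    if "auth" in usable:
        return TrustTier.BASIC
    return TrustTier.UNTRUSTED

print(tier_of({"auth", "encryption", "attestation"},
              {"auth", "encryption", "attestation"}).name)  # TRUSTED
print(tier_of({"auth", "encryption"}, {"auth"}).name)       # BASIC
```

The intersection of capability and willingness captures the point above: a capable thing that declines to exercise a capability in a given context should not be trusted as if it had.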

Written by mengchow

July 15, 2016 at 11:37 am


Lucas Critique 


Written by mengchow

July 14, 2016 at 12:19 pm


When our guard is down


We don’t normally feel the reality of a criminal attack on the Internet (or so called Cybercrime attack in the Cyberspace these days) until someone we know, especially when a friend, or a relative actually became a victim to such an incident. If we see an accident on the road, we actually see it. Our emotional status changes at that point, and we are likely to become more cautious for at least momentarily, and this heightened state of vigilance will likely stay with us for a short period until the image of the accident has been put behind us. Then nothing happens, and we will let our guard down. Life goes on.

Risk in the online world (aka Cyberspace) is so opaque that even after learning about an incident that is still ongoing, we go online and everything in front of us (in our own cyber landscape) still looks normal. The scene of the incident is not just virtual; it changes dynamically. If the victim is an end user, the affected device is likely a home computer, tablet, or smartphone, which doesn’t even have a web front end, and there are no network logs available to analyze, unlike what organizations have in most cases. Unless we are physically at the same location as the victim, we have to imagine what the scene looks like. It is not as observable. So our guard remains down.

One common thing about Cybersecurity incidents is that by the time we hear or read about one, it has likely already happened to someone before. Otherwise, we may not even find out, especially as end users. By performing an online search using keywords related to the problem, we then learn, as third persons, about the danger to which we were lucky not to have been exposed, or perhaps not known to have been exposed, and we can now learn how to find out whether we are truly lucky or just ignorant. I guess that’s one of the benefits of having the Internet.

An old friend called last night. A few hours before, he had received a call from someone claiming to be from Microsoft technical support, who informed him that his machine had been found infected with malware and volunteered to help him solve it. But before they could help him, he had to renew his technical support contract, at a cost of S$399. Driven by fear of the unknown malware and the urgency of the caller’s tone, he complied with the caller’s advice and proceeded to make the payment online. He then allowed the caller to take over control of his machine remotely, and the caller started installing stuff on it. After the person hung up the phone, while the remote installation continued, he started to think about what had just happened and decided to call me to check whether Microsoft would do such a thing. Unfortunately, he had fallen for a tech support scam 😦 Microsoft has published quite substantially about this scam at: https://www.microsoft.com/security/online-privacy/avoid-phone-scams.aspx.

As I reflect on this incident, a question emerges: what if I receive such a call myself? Would there be a chance that I get scammed as well? I think there is always a possibility, since I’m also a human being and can react emotionally or impulsively, depending on how the caller manages the conversation. Furthermore, even as an information security worker, it is impossible for me to know every single way scammers work. Today they may use tech support, tomorrow another service, and the next day something else that can get me to respond the way they want. There are just too many ways to break something or someone, and it is often not too difficult to do. Social engineering is a mature craft in itself, as Robert B Cialdini has shown in his book “Influence“, and Kevin D Mitnick in “The Art of Deception“.

When asked how to stay safe online, the short answer is often “be vigilant.” Unfortunately, it is impossible to be vigilant all the time. It would be highly stressful, and the effects on our health might even be worse than suffering an online scam. In reality, our guard is often down. We react to situations as they develop. What’s worse is that we also have a tendency to develop and use automation in our brains to take shortcuts and react quickly. The default mode is often to react automatically, a survival instinct, especially when triggered under pressure, as Robert B Cialdini discovered in the research and experience described in “Influence“.

In the organizational context, readiness drills and exercises can help heighten users’ awareness, build up the technical infrastructure, and enhance individuals’ competencies to enable faster detection of and better responses to security attacks. For example, read my earlier blog on “Responsive Security in Action” in my blog series on Responsive Security. Many organizations have started doing this in the past few years. The security industry (for the enterprise market in particular) has in recent years also been developing more products and services that facilitate higher security readiness. But for consumers at large, people who do not work for big organizations, how do we get them ready to be safe and secure? I think this is a much more challenging area. Over the years, I have thought about a few ideas, but these are just snippets of tactics, not a complete solution.

For example, can there be virtual security signposts and posters (in the form of warnings/alerts, or “watchful eyes“, instead of just advertisements) in the online environments where we browse and roam regularly? How should the web architecture of the Internet evolve to facilitate security needs? Who should own the outcomes, which dictate the contents and the delivery?

Who should plan, organize, and fund Cyber readiness drills/exercises for citizens who are online as well? How do we tell whether a drill is real or yet another scam? There is no simple answer to these questions, unfortunately.

What I’ve also realized through a number of incidents involving friends thus far is that money is a common denominator. That’s what most scammers are after (unless you are someone who has more to offer than money). If someone asks for money to be transferred, stop, take a deep breath, and think about it again – must I make this payment, and must it be now? This approach is similar to what Cialdini advises in “Influence” on how not to be scammed into buying things that we don’t need, i.e., turn off the automated reaction mode. Pause, think, then act. It may not be fool proof, since we are human, taking shortcut is in our DNA. But if we can remember to slow down under stressful or questionable situations, it will very likely halt the incident from progressing to a full blown one. Nevertheless, something not happening is not an observable outcome. Bear in mind that the attacker may also take less aggressive steps initially in order to gain our trust, and collect more information about us and our friends and family before executing her true mission. Question why we should trust this person (especially if he/she is someone we haven’t met previously) before proceeding.

Finally, if you are a Microsoft user, do take note of how to contact their official support: https://www.microsoft.com/en-sg/contact.aspx. Perhaps copy the contact information into your address book so it is always handy. For Apple users, I couldn’t find a local contact number for Apple support, just their general support site: http://www.apple.com/sg/support/contact/, which could still be useful.

Best wishes and a happy new year!

Written by mengchow

January 6, 2016 at 11:19 am