Argon Incident

Monday, 9:39 a.m.

The phone rings: the manager of a company that has just been hacked needs our help. He explains that he was shocked to wake up to several missed calls from customers and journalists trying to reach him. Listening to the first messages, he quickly realized that his company had been hacked and that the story had made the front page of the mass media. The media reported large figures for the amount of exfiltrated data, creating panic among customers. Some clients are even threatening to break their contracts.

We collect information on the impacts observed and the response measures already taken by the company, and we note the company's address. The customer informs us that he has shut down the hacked application.

Monday, 9:56 a.m.

As with all emergencies, we bring our incident response team together in our crisis management room for a briefing. In this case, we face three challenges:

1- help the customer respond effectively to the incident and return to production as quickly as possible;

2- reassure the company's customers that the situation is under control and that vigorous response measures are being put in place;

3- help the client manage his crisis communication effectively.

Two incident response experts from our team rush to the client's location to take charge of the situation.

Monday, 10:47 a.m.

We've arrived at the premises of the hacked company. The phone doesn't stop ringing and the company's manager is being pulled in every direction at once. At the other end of the line, disgruntled customers want to know if their personal data has been exfiltrated. Some are threatening to sue the company, while others are simply asking to break their contracts. And it doesn't end there.

Monday, 10:56 a.m.

We debrief with the customer and identify measures to contain the incident and prevent it from spreading within the network. We help him write a message that is then sent to all his customers: the response has already begun.

Monday, 11:15 a.m.

Containment measures are in place. Next step: we start the investigation. Our first objective: determine the extent of the hack, its modus operandi and the weaknesses in the network that were exploited.

Monday, 11:47 a.m.

We have some interesting leads. Our team starts developing a script to analyze the collected data, which will allow us to determine the exact number of exfiltrated records. This number is well below the figure quoted in the media. Now we just need to determine the M.O. of the attack. An in-depth analysis of the impacted server and a correlation with external data allow us to determine how the hacker gained access to the network.
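To give a concrete idea of the kind of analysis script involved, here is a minimal sketch in Python. It assumes the collected data is a web-server access log exported as CSV and that the attacker's source IPs are already known from the investigation; the column names, endpoint pattern and IP addresses are all hypothetical and not taken from the actual case.

```python
# Minimal sketch: count distinct records accessed from known attacker IPs.
# Assumes a CSV access log with columns: ip, url, status (hypothetical format).
import csv
import re

ATTACKER_IPS = {"203.0.113.45", "203.0.113.71"}    # hypothetical attacker sources
ACCOUNT_URL = re.compile(r"/api/accounts/(\d+)")   # hypothetical data endpoint

def count_exfiltrated_records(log_path: str) -> int:
    """Count distinct account records successfully fetched from attacker IPs."""
    exfiltrated_ids = set()
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            # Keep only successful requests coming from the attacker's addresses.
            if row["ip"] not in ATTACKER_IPS or row["status"] != "200":
                continue
            match = ACCOUNT_URL.search(row["url"])
            if match:
                exfiltrated_ids.add(match.group(1))
    return len(exfiltrated_ids)

if __name__ == "__main__":
    print(count_exfiltrated_records("access_log.csv"))
```

Counting distinct record identifiers rather than raw requests is what keeps the figure honest: repeated downloads of the same account do not inflate the total, which is how the number can come out well below the one circulating in the media.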

We reconstruct the various stages of the attack and determine which weaknesses in the network were exploited. The hacker took advantage of an application flaw that allowed a user (with an account in the application) to access other users' accounts after authenticating. The vulnerability therefore cannot easily be exploited on a large scale.
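To illustrate the class of flaw described above (an authenticated user reading other users' accounts, often called an insecure direct object reference), here is a hedged, framework-free sketch; the data model and function names are hypothetical and do not come from the client's application.

```python
# Illustration only: the vulnerable handler trusts the account_id supplied in
# the request, while the fixed handler verifies ownership before returning data.
ACCOUNTS = {
    101: {"owner": "alice", "balance": 2500},
    102: {"owner": "bob", "balance": 900},
}

def get_account_vulnerable(authenticated_user: str, account_id: int) -> dict:
    # Flaw: any authenticated user can read any account by changing account_id.
    return ACCOUNTS[account_id]

def get_account_fixed(authenticated_user: str, account_id: int) -> dict:
    account = ACCOUNTS.get(account_id)
    # Fix: enforce ownership server-side on every request.
    if account is None or account["owner"] != authenticated_user:
        raise PermissionError("access denied")
    return account

if __name__ == "__main__":
    print(get_account_vulnerable("bob", 101))   # leaks alice's data
    try:
        get_account_fixed("bob", 101)
    except PermissionError as exc:
        print(exc)                              # access denied
```

Whatever the framework, the fix is the same: the ownership check has to happen server-side on every request, rather than relying on clients only asking for their own account identifiers.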

Monday, 1:15 p.m.

Debriefing with the client. At this stage, the bleeding has been stopped and the blood pressure drops a notch.

Monday, 1:25 p.m.

Next objective: determine whether the hacker is still in the network, in order to eject him and close all doors before he can cause far more damage (if he realizes that his presence in the network has been identified). We do this by plugging our CDS cyber threat detection technology into the network. The CDS begins to capture and analyze network traffic to detect traces of attacks or abnormal behaviour. At the same time, our two security experts search the network for any trace of an illegitimate presence. Correlating the information provided by the CDS with our experts' manual analysis allows us to confirm that the hacker is not in the network.
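This is not the CDS itself, but as a toy illustration of the kind of behavioural check such a sensor can run on captured traffic, the sketch below flags hosts that contact an unusually large number of distinct destination ports within a capture window (a crude port-scan heuristic). The flow record format and the threshold are assumptions.

```python
# Toy behavioural check on captured flow records (not the actual CDS logic).
from collections import defaultdict

SCAN_THRESHOLD = 100  # distinct destination ports per source within the window (assumed)

def flag_port_scanners(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples from a capture window."""
    ports_per_source = defaultdict(set)
    for src_ip, _dst_ip, dst_port in flows:
        ports_per_source[src_ip].add(dst_port)
    # A source touching many distinct ports in one window looks like scanning.
    return [src for src, ports in ports_per_source.items() if len(ports) >= SCAN_THRESHOLD]

if __name__ == "__main__":
    sample = [("10.0.0.5", "10.0.0.9", port) for port in range(1, 150)]
    sample += [("10.0.0.7", "10.0.0.9", 443)]
    print(flag_port_scanners(sample))  # ['10.0.0.5']
```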

Monday, 3:00 p.m.

Debriefing. Results are communicated to the client. The blood pressure keeps dropping. A second message is written and sent to the client's customers.

Monday, 3:27 p.m.

It is essential that the company return to production as quickly as possible to minimize the impact on its customers. We must therefore test the hacked application to identify its vulnerabilities and flaws and correct them. The company gives us access to an instance of its application in a test environment. We dispatch one of our intrusion testing experts (commonly known as a pentester) to test the application from top to bottom.

Monday, 6:02 p.m.

While waiting for the arrival of our pentester, we set up the test network with the help of the client's IT team.

Monday, 6:49 p.m.

Arrival of our pentester. He has carte blanche for his test... the sky is the limit.

Monday, 11:54 p.m.

Our intrusion testing expert has come up with some interesting leads.

Tuesday, 2:00 a.m.

Debriefing with the client. The intrusion test continues. Our tester confirms that the application has a good level of security and that a good level of expertise is required to find and exploit the flaw in question. Everything suggests that we are dealing with a targeted attack. During the intrusion test, other vulnerabilities were found, and our pentester is writing exploits for them.

Tuesday, 6:00 a.m.

Some of the vulnerabilities found were exploited. An overall map of the situation is drawn up by the StreamScan team by linking the findings of the intrusion test, the results of our experts' incident response investigation and the security events generated by our CDS technology.

Tuesday, 7:00 a.m.

Debriefing with the client. The main corrective recommendations are presented to the client, and the client's IT team works on implementing them with our support. We continue to monitor our client's network via the CDS, detecting ongoing attacks and blocking them. We are able to confirm that there is no link between these attacks and the current incident, which reassures the already nervous and frightened customer.

Tuesday, 7:30 a.m.

A well-deserved rest for our pentester and our incident response specialists, who worked through the night. With the situation under control, monitoring of the network's security via the CDS is entrusted to our DRG (Managed Detection and Response) team. This team remotely supervises the security of our customers' networks and takes action as soon as an anomaly or anything suspicious is detected.

Wednesday, 4:21 p.m.

The application's vulnerabilities have been fixed. Our pentester returns to the customer's premises and retests the application. He confirms that everything is OK and that we can put the application back into production.

Wednesday, 9:30 p.m.

Debriefing with the customer; we agree to put the application back into production the next day at 9:00 a.m.

Thursday, 9:00 a.m.

One of the critical steps in incident management is ensuring that the incident does not reoccur when you return to production. Once the application is back in production, it is closely monitored by our CDS technology. Remotely, from our premises, our DRG team's cyber threat hunters watch for any suspicious movement on the network. Every suspicious network flow or packet is dissected, and abnormal behaviour is scrutinized.

At the same time, the company issues a press release informing its customers that its application has been secured and is back in production.

Post-incident monitoring will continue for several days to confirm that the situation is completely under control. Daily reports are provided to the client.

Two weeks later...

We meet with the company for an overall review of the incident (post-mortem meeting). The customer is satisfied with our quick response (the incident was taken over on his premises only one hour after his call) as well as with our efficient management of the incident, which enabled him to return quickly to production.

He decides to entrust us with the remote supervision of the security of his network over the long term, via our CDS technology.