Elastic SIEM

Elastic Stack Overview

Lab 1.1: Elastic Agent Configuration

Objective: Demonstrate use of Fleet and Elastic Agents.

Reference Material:


In this lab we will walk through how to configure Elastic Agents using Fleet

  1. In Kibana, navigate to Management -> Fleet

    1-1. Select Fleet in the Management section of the menu.

  2. In order to get started with Elastic Endpoint, we will configure an Elastic Agent.

    2-1. Select Add Agent

    An Agent Policy is required to add an Agent. From this menu we can add a Policy and add integrations or make changes to the default later.

    2-2. Name your policy Agent Policy Lab.

    2-3. Select Create Policy. Note that this may take a moment to create.

    2-4. Leave the recommended setting Enroll in Fleet selected.

    The Elastic Stack is now listening for the agent to enroll in Fleet.

    2-5. Select Windows as your platform. You will be presented with a command to run on your Windows endpoint.

    For this lab, only the last line of the command is needed as we have prestaged the agent on the host machine.

    Note: Be sure to copy the last line of the command to your clipboard as it will be used in the Install Elastic Agent step.

  3. Open your Windows endpoint in Strigo.

    3-1. At the top left of the Strigo terminal, select the down arrow to the right of the student-lab text.

    3-2. In the menu, select Windows-Endpoint.

  4. Install Elastic Agent on your Windows endpoint

    4-1. Open File Explorer, select This PC then select Windows 10 (C:)

    4-2. Double click the Agent folder to open the C:\Agent directory.

    4-3. Right click the zipped folder and select Extract All

    4-4. In the Extract Compressed (Zipped) Folders window, select Extract

    If you de-selected the Show extracted files when complete option, you will need to navigate to the folder that was extracted to view the contents.

    4-5. In the folder with the extracted Elastic Agent files, Shift + Right click within the File Explorer window to present additional options within the drop-down pane.

    4-6. In the menu, select Open PowerShell window here

    4-7. In the PowerShell terminal, paste and run the Elastic Agent install command that was copied from the Agent configuration step.

    The command syntax should start with:

    .\elastic-agent.exe install --url=https://i-....

    4-8. The Elastic Agent Installer will ask if you want to continue. Enter Y (Yes) when prompted.
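    For reference, the full enrollment command that Fleet generates follows the general shape below. The Fleet Server URL and enrollment token are unique to your deployment, so the values here are placeholders, not values to run as-is:

    ```
    .\elastic-agent.exe install --url=https://<fleet-server-host>:8220 --enrollment-token=<enrollment-token>
    ```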

  5. Verify the Elastic Agent is enrolled and sending data

    5-1. Navigate back to Kibana.

    5-2. The Agent should now be enrolled, and incoming data confirmed.

    5-3. Close the Add Agent pane.

    There should now be an additional host listed under the Agents tab with the policy you created previously.

    5-4. On the Fleet page, select the host student

    5-5. Select the Logs tab.

    Here you can ensure you are receiving metrics, application, security, and system logs.


Summary

In this lab, we learned how to configure an Elastic Agent with Fleet.

Lab 1.2: Configuring Agent Policies and Integrations

Objective: Demonstrate use of Fleet and Elastic Agents.

Reference Material:


In the last lab, we observed the steps required to add an Elastic Agent so we would be ready to configure Elastic Agent policies. In this lab, we will configure the Agent Policy created specifically for Endpoint Security.

  1. In Kibana, navigate to Management -> Fleet -> Agent policies

    1-1. Select Fleet in the Management section of the menu.

    1-2. Select the Agent policies tab.

  2. Open the Agent Policy Lab policy.

    In the previous lab, you created an agent policy while adding an Elastic Agent.

    2-1. Select the policy named Agent Policy Lab

  3. Add the Elastic Defend integration to the Agent Policy Lab policy.

    3-1. In the Agent Policy Lab page, select Add integration

    The Integrations page is displayed; note that there are many integrations available.

    3-2. Select Elastic Defend

    3-3. Select + Add Elastic Defend

    3-4. For the Integration name, enter elastic-defend-lab

    3-5. Ensure that the configuration settings are set to Complete EDR (Endpoint Detection & Response).

    3-6. Ensure that Existing hosts is selected and the Agent policy is set to Agent Policy Lab

    3-7. Select Save and continue

    3-8. Select Save and deploy changes when prompted. This may take a few moments.

    The newly configured Elastic Agent will connect back to Elasticsearch and begin installing the Endpoint Security integration. Before we check on the status of the agent, we need to make a change to the Endpoint Security policy we just created.

  4. Update the Agent Policy Lab agent policy.

    4-1. Navigate to Management -> Fleet -> Agent policies

    4-2. Select the agent policy you created

    4-3. Select elastic-defend-lab

    4-4. Scroll down to the bottom of the integration page and select Enabled under Register as antivirus

    4-5. Select Save integration

    As this section suggests, this will register Elastic Security as the antivirus and fully disable Windows Defender on our Windows 10 endpoint.

    4-6. Select Save and deploy changes when prompted.

  5. Verify the Elastic Agent is healthy.

    5-1. Check the status of the Elastic Agent by navigating back to the Agents tab and selecting the hostname student.

    5-2. Within the overview section, ensure the status is Healthy.

    Under the Integrations tab, you can see within the Policy Response dropdowns that the responses have been successfully applied to the Agent.


Summary

In this lab, we configured an Elastic Agent policy with the Elastic Defend integration.

Discover

Lab 3.1: Discover - Getting started with Kibana

Objective: Become familiar with Kibana and the features of Discover.

Reference Material:


  1. Navigate to Discover.

    1-1. Select the Discover button under Analytics in Kibana.

    Alternatively you can navigate to the hamburger menu and select Discover under Analytics.

  2. Adjust the Time Picker.

    2-1. Adjust the time picker by selecting the Calendar, then Day 0 under Commonly Used. Kibana will automatically refresh and update the page when picking a commonly used time range.

  3. Observe the available Data Views.

    3-1. Select the Data View dropdown to observe which data views are available.

  4. Review document results via built-in histogram.

    4-1. Observe the number of total hits on the documents tab. Hover your cursor over the largest green bar to see the document count for the first time bucket in the histogram. Click on the largest green bar.

    4-2. Observe how your time range has changed to focus on the specific time range you chose from the histogram. Select Day 0 again to set your time picker to our original range.

  5. Apply filters to gain additional insight about the data.

    5-1. To create a filter from the Available Fields list, select the name of a field to view the top 10 values. In this example we will use source.address. Select the plus sign (+) next to one of the values to filter for it. Your filter will now appear under the query bar. To remove this filter select the X.


    5-2. Let’s create this filter in a different way. Select (+). Then select a field in the field drop down menu. Next, select the is operator, and an available value. Finally, select Add Filter. This new filter will provide the same results as the one created in Step 5-1.
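    Both methods of building a filter are equivalent to typing a KQL expression directly into the query bar. For example, if the value selected in Step 5-1 were 10.0.0.5 (a placeholder; substitute whichever value you chose), the query bar equivalent would be:

    ```
    source.address : "10.0.0.5"
    ```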

  6. Inspect individual documents from the filtered results.

    6-1. Expand one of the documents in the document table by selecting the arrow at the left of its row.

    6-2. Observe the fields and value data associated with this document and hover over the Actions column to view the options available.

    6-3. Choose an interesting field, and select the minus sign (-) under actions to filter the value out. This will exclude all results that match the filter.

  7. Leverage the query menu to apply different filter options.

    7-1. You should now have two filters applied. Select the query menu button above the filters.

    7-2. Select Apply to all to view options that apply to every filter.

    The options available are:

    • Enable all - Enables any disabled filters.

    • Disable all - Disables any enabled filters.

    • Invert inclusion - Inverts the inclusion/exclusion of all filters.

    • Pin all - All filters will stay in your filter bar and be applied to any Dashboards, Visualizations, etc. that you navigate to until Unpinned.

    • Unpin all - All filters will be unpinned from the filter bar.

    The example below would Pin all filters across apps in Kibana.

  8. Observe the options available for individual filters.

    8-1. Select one of the filters to observe the options available.

    • Pin across all apps - Filters will stay in your filter bar and be applied to any Dashboards, Visualizations, etc. that you navigate to until Unpinned

    • Edit filter - Change filter, operators, value, add additional filters, and custom labels

    • Exclude results/Include results - Exclude/invert the filter from search results

    • Temporarily disable/Re-enable - Disables/Re-enables filters

    • Delete - Remove the filter entirely

  9. Inspect a new document with the new filters applied.

    9-1. Expand a new document from the updated results.

    9-2. Hover over the Actions column menu and choose the Toggle column in table option for a different field to view it as a column in the document table.

    9-3. Take a look at the results table. You will see that the new column has been added. Adjust the size of your columns, if needed, by dragging and dropping the edges of each column.

    9-4. Select the field name of the new column you just created. A popup menu will give you options for sorting by the field, moving the field, and removing the field, as well as some column-specific options such as copying the field name or value to the clipboard. Select the X to remove this column.

    You can also add columns to the view by hovering over a field in the Available Fields list and selecting the plus sign (+).


Summary

In this lab, we learned how to use the basic features of Discover to sort and filter data.

Lab 3.2: Searching with KQL and Lucene

Objective: Demonstrate use of KQL and Lucene search syntax.

Reference Material:


In this lab we will learn how to switch between the Kibana Query Language and Lucene Search Syntax to parse logs in the Discover app.

The default search language in Kibana is the Kibana Query Language (KQL). Lucene-only queries will not work unless the selected search language is switched to Lucene.

  1. Learn to switch between KQL and Lucene.

    1-1. Click on the hamburger menu to the left of the search bar.

    1-2. Select Language.

    1-3. Select either KQL or Lucene.
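    The two languages express the same search with different syntax. For example, a bounded range over originating ports (assuming ECS field names, which the ecs-zeek-* data view suggests) can be written in each language as:

    ```
    KQL:    source.port >= 10 and source.port <= 1024
    Lucene: source.port:[10 TO 1024]
    ```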

  2. Get the following info:

    1. What is the total number of Zeek documents for Day 0 (Aug 1, 2018)?

    2. How many documents are from the Zeek http.log?

    3. How many documents exist from the http.log where the request method is either GET or POST?

    4. How many documents exist from the http.log where the request method is POST and the response status is not 200?

    5. How many documents exist from the http.log where the request method is POST and there is no response code?

    6. How many documents are from the Zeek dns.log?

    7. How many documents from the dns.log contain a query with the Top Level Domain (TLD) of .io?

    8. How many connections were made to hosts in the 172.16.100.0/24 network?

    9. How many results exist for originating ports greater than 1024?

    10. How many results exist for originating ports between (and including) 10 and 1024?

    11. Write a regular expression to match on any IP address. How many dns answers contain an IP address?
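    As a syntax reference for the questions above, the patterns below cover the constructs involved. The field names assume ECS-style mappings (which the ecs-zeek-* data view suggests); verify them against your own data view before trusting the counts:

    ```
    # KQL (the default language)
    http.request.method : (GET or POST)              match either value
    http.request.method : POST and not http.response.status_code : 200
    http.request.method : POST and not http.response.status_code : *
    dns.question.name : *.io                         wildcard match
    destination.ip : "172.16.100.0/24"               CIDR match on an ip field
    source.port > 1024                               open-ended range
    source.port >= 10 and source.port <= 1024        bounded range

    # Lucene (switch the search language first)
    dns.answers.data:/([0-9]{1,3}\.){3}[0-9]{1,3}/   regex match on an IP address
    ```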


Summary

In this lab we learned how to switch between Kibana Query Language and Lucene Search Syntax.

Visualizations

Lab 4.1: Aggregation Based Visualizations - Data Table

Objective: Learn how to create a basic data table visualization in Kibana.

Reference Material:


  1. Create a new Data Table visualization.

    1-1. Navigate to the Visualize Library tab under Analytics in Kibana and then select Create visualization

    1-2. Select Aggregation Based

    1-3. Select Data Table

  2. Set the Data View and Time Picker.

    2-1. Select the ecs-zeek-* data view

    2-2. Use the time picker to set the date range for Day 0.

  3. Configure the visualization options.

    3-1. Under Buckets, select the Add button and select Split rows

    3-2. Drop down the Aggregation menu, scroll down the list and select the Terms option.

    3-3. Drop down the Field menu and type source.ip to select the field for all originating IP addresses. Change the Size to 10

    3-4. Select the Update button to update the visualization with the new settings. Notice that when you change the Size and order by Descending, this visualization will only list the top 10 results by count for the field chosen. This visualization in particular lists the top 10 originating IP addresses in the Zeek logs during the time frame selected.

    To keep these changes, select Save to save the visualization for later use. Name the Visualization LAB - Count of Top 10 Source IPs. Under Add to Dashboard, select None for this visualization, and future lab visualizations, then select Save and Add to Library


Summary

In this lab, we learned how to create a basic data table visualization that lists the top 10 originating IP addresses.

Lens

Lab 5.1: Lens Visualizations

Objective: Learn how to make basic visualizations with Lens

Reference Material:

This lab will use Lens to build a horizontal bar chart showing the Top 10 destination IP addresses broken down by destination port.


  1. Create a new Lens visualization.

    1-1. Navigate to the Visualize Library tab under Analytics in Kibana and then select Create visualization.

    1-2. Select Lens.

  2. Set the Data View and Time Picker.

    2-1. Check that your date is set to Day 0, the data view is set to ecs-zeek-*, and the visualization type is Bar vertical stacked.

  3. Configure the visualization options.

    3-1. Drag the destination.ip field from the field list into the center workspace.

    3-2. Look at the Horizontal axis and Vertical axis values in the Layer pane on the right. Lens has auto-populated the values based on the chosen field. Explore the Suggestions beneath the visualization. Click through a few to see what alternative visualizations Lens suggests.

    After exploring some of the suggestions, return to the Bar vertical stacked visualization.

    3-3. In the Layer Pane:

    • Select the Top values of destination.ip under Horizontal axis

      • Change the Number of values to 10

      • Select Advanced and deselect the Group remaining values as "Other"

      • Change the Appearance Name to Destination IP

      • Close the Horizontal axis pane

    3-4. Change the visualization type by selecting Bar vertical stacked from the center pane. This will open a dropdown. Select Bar horizontal stacked

    3-5. The data can be broken down further to give additional insight. Drag the destination.port field to the Breakdown field of the Layer Pane

    3-6. Examine the chart legend to see what port values Lens has chosen. Lens has set the function of the breakdown field to intervals. Actual port values will be more useful.

    • Select destination.port under the Breakdown field

      • In the Breakdown pane select the function of Top values

    3-7. When changed to Top Values, the Breakdown pane shows new settings:

    • Adjust the new settings

      • Change the Number of values to 5

      • Change the Appearance Name to Destination Port

      • Close the Breakdown pane

    3-8. The completed visualization should match the image below.

    3-9. To keep these changes, select Save to save the visualization for later use.

    3-10. Name the Visualization LAB - Top 10 Destination IP Addresses by Port. Under the section Add to dashboard, choose the option None. Save the visualization by clicking Save and add to library.


Summary

This lab introduced using Lens to quickly create and adjust a visualization.

Lab 5.2: Lens Data Table

Objective: Learn how to make data table visualizations with Lens

Reference Material:

This lab will use Lens to build a data table showing the Top 10 destination IP addresses with source and destination bytes.


  1. Create a new Lens visualization.

    1-1. Navigate back to the Visualize Library -> Create Visualization -> Lens

  2. Set the Data View and Time Picker.

    2-1. Check that your date is set to Day 0 (1 August 2018), the data view is set to ecs-zeek-*, and the visualization type is Bar vertical stacked.

  3. Configure the visualization options.

    3-1. Drag the following fields from the field list into the center workspace in the following order:

    • destination.ip

    • source.bytes

    • destination.bytes

    Note: If fields are dragged into the workspace in a different order than listed above, your visualization will be incorrect.

    3-2. Examine the visualization Lens has created. Notice that Lens has populated all three fields in the Vertical Axis and has used a Median as the Metric for both source and destination bytes.

    This data will be easier to view as a table. Select Bar vertical stacked above the main workspace and change it to Table.

    3-3. Now the fields need to be adjusted. From the Layer Pane on the right, select Top values of destination.ip

    • Change the following:

      • Change the Number of values to 10.

      • Select Advanced and deselect Group remaining values as "Other"

      • Change the Appearance name to Destination IP

      • Close the Row pane

    3-4. Lens has defaulted the bytes values to show Median. For this visualization, Sum will be more helpful.

    • Under Metrics:

      • Select Median of source.bytes to open the Metric pane

      • Change the Function to Sum and the Appearance name to Source Bytes

    3-5. Staying in the Metric pane, select the Summary Row dropdown and choose Sum. This will give a sum for the column beneath the table.

    3-6. Continuing in the Metric pane, find Color by value and select Text. This will change the color of the text based on threshold values. A Color field will appear to allow customization. Select Edit and change the Color palette to Status using the drop down menu. Select Back, and then Close to exit the Metric pane.

    3-7. Perform the same steps for the Median of destination.bytes metric field.

    • Change the Function to Sum and the Appearance name to Destination Bytes

    • Select the Summary Row dropdown and choose Sum

    • Find Color by value and select Text

    • Edit and change the Color palette to Status

    3-8. The completed visualization should match the image below.

    3-9. To keep these changes, select Save to save the visualization for later use.

    3-10. Name the Visualization LAB - Top 10 Destination IP With Source and Destination Bytes. Under the section Add to dashboard, choose the option None. Save the visualization by clicking Save and add to library.


Summary

This lab introduced using Lens to quickly create and adjust a data table visualization.

Lab 5.3: Lens Visualizations - Multi-Layer Date Histogram

Objective: Learn how to use Lens to create a date histogram with multiple layers

Reference Material:

This lab will use Lens to build a date histogram from Zeek and Suricata record counts.


  1. Create a new Lens visualization.

    1-1. Follow the steps from the previous labs. Check the following:

    • Date is set to Day 0 (1 August 2018)

    • Data view is set to ecs-zeek-*

    • Visualization type is Bar vertical stacked

  2. Configure the visualization options.

    2-1. Drag the Records field into the Workspace. Lens will create a Date Histogram with the record count.

    2-2. Adjust to show only the records from 1 August 2018 - 11:00:00.000 to 12:00:00.000. This can be done in the time picker or by zooming in the visualization. The X-axis should show @timestamp per minute.

    2-3. On the layer pane to the right of the visualization, select Add layer, then select Visualization.

    2-4. Once the new sub menu is expanded, select Bar vertical stacked for the visualization type. This will add a second visualization layer that is independent from the first.

    2-5. In the new layer, change the Data view to ecs-suricata-*

    2-6. With the ecs-suricata-* data view selected, drag the Records field into the workspace.

    2-7. Lens stacks the Suricata data with the Zeek data because the visualization type is set to Bar vertical stacked. It will be more useful to see this data separately so the Suricata and Zeek counts can be compared.

    Change the Layer visualization type of the Suricata layer to Line.

    2-8. The chart now shows Zeek records as the green bar chart and Suricata records as the blue line chart. Note the visualization legend shows Count of Records for each data set, making it difficult to understand the chart.

    Make a couple of adjustments to make the chart more readable. Select the Count of Records in the Zeek layer and change the Appearance name to Zeek. Change the color so the colors are more distinguishable from each other.

    2-9. Update the Appearance Name and Series color of the Suricata layer using the same steps.

    2-10. The completed visualization should match the image below.

    2-11. To keep these changes, select Save to save the visualization for later use.

    2-12. Name this visualization LAB - Zeek and Suricata Records. Under the section Add to dashboard, choose the option None. Save the visualization by clicking Save and add to library.


Summary

This lab introduced using Lens to create a multi-layered date histogram using two different data views.

Dashboards

Lab 6.1: Utilizing Dashboards - Creation

Objective: Learn how to create a dashboard using the visualizations you have already created.

Reference Material:


For the following exercises, make sure to adjust the time picker by selecting the Calendar, then Day 0 under Commonly Used.

  1. Create a new Dashboard.

    1-1. Navigate back to the Dashboard tab under Analytics in Kibana. Observe the available pre-built Dashboards. We will look at these Dashboards further in the next lab. Select Create a Dashboard

  2. Configure the Dashboard by adding visualizations.

    2-1. When you create a Dashboard, you start out in Edit mode. Start editing by adding a visualization. Select Add from library

    2-2. Using the search bar, search for "LAB".

    2-3. Add the four visualizations that you created in the previous labs by selecting them from the search results. Organize the layout of your Dashboard by hovering over the title of a visualization and dragging and dropping it. The arrow in the bottom right corner can be used to resize visualizations.

    There are quite a few visualizations here that show an abundance of results. Next, let's pare down the information using filters.

    2-4. Write and apply the following query, so that it will persist when the Dashboard is saved.

    source.ip : 172.16.100.0/24

    Notice the number of results is now smaller. These results now show only traffic originating from the internal network.

    2-5. You now have Unsaved Changes. Select the blue Save button.

    2-6. Name your newly created Dashboard My LAB Dashboard and save it.

    2-7. If you would like to rename your dashboard and save a new version, use Save as. At this point, you are still in Edit mode. Select Switch to view mode.


Summary

In this lab, we learned how to configure and add visualizations to a dashboard.

Lab 6.2: Utilizing Dashboards - Analysis

Objective: Learn how to utilize Dashboards, Filters, and Searches to carry out a threat hunt.

Reference Material:

For the following exercises, make sure to adjust the time picker by selecting the Calendar, then Day 0 under Commonly Used.


Using Dashboards to Hunt

During this lab, we will learn how to leverage dashboards for threat hunting.

Zeek Connections Dashboard

  1. Next, we will explore the use of dashboards to hunt. First, navigate to the main dashboard screen.

    1-1. The first dashboard that we will look at (and starting point for most threat hunters) is the Zeek Connections Dashboard. To find this dashboard, search for the term Zeek or filter by the NSM Operator tag. Select the Zeek Connections Dashboard.

    Take a look at some of the key visualizations in here and what the data is actually telling you.

    • Top Orig Bytes and Top Resp Bytes can be very useful to find spikes in data being sent

      • Use this in conjunction with the Log Count over Time visualization and you may be able to find a lot of data being sent with fewer connections than you might expect

    • This cluster of visualizations can tell us a lot

      • It is particularly useful for finding if a top talker is not a top sender of data

    • The Top Long Running Connections visualization is essentially just a saved search, where the duration field is set to > 900. Long connections can be a very easy win for us as threat hunters

    • Finally, at the bottom we see the Zeek Conn Log Search visualization. This is, once again, a saved search, however it has no searches/filters/etc on it. Using this will likely prove more efficient than going back to the Discover tab to look at specific log data

Zeek HTTP Dashboard

  2. Now we will take a look at the HTTP dashboard.

    2-1. Scroll to the top of your Zeek Connections Dashboard and look for the navigation panel that says CONN DNS FILES ... and select HTTP

    2-2. Let's take a look at some more key visualizations.

    • The Top Resp Ports visualization is a very easy win for us since it makes finding anomalies so easy

      • We also have a wordcloud visualization that shows the uncommon HTTP ports

    • The Top Request Methods chart is also very important

      • Keep in mind we should mostly be seeing GETs and POSTs. Any DELETEs will be cause for further investigation.

    • Beyond this we have some simple data tables like Top HTTP Hosts, Top Referrers, Top URIs, and Top User Agents that will all become more useful as we filter down in our investigations

    • As with the Zeek Connections Dashboard, we have the Zeek HTTP Log Search at the bottom so you don't need to go to the Discover tab

Playing with Filters

  3. Now we will apply some filters to a dashboard.

    3-1. Let's go back to the Zeek Connections Dashboard and ensure our time picker is set to Day 0. Remember, as with sci-fi movies, playing with time can give unexpected results, so always check that you're in the right time frame.

    3-2. Next we will be creating and modifying a filter using the filter creator. Select + Add filter, which can be found next to the query bar.

    3-3. The first thing we need to do is choose a field to filter on. Let's start typing destination.port; you'll notice Kibana serving up suggestions as you type.

    3-4. Next, look at the drop down for operators. The options include: is, is not, is one of, is not one of, exists, and does not exist.

    In this case, we will use the is operator.

    3-5. Next, enter a value. For this lab, we will use 443.

    3-6. Before you save the filter, select "Create custom label". With this you can, as the name suggests, have a custom label rather than the filter just showing the query syntax. Enter 443 for the Custom Label and select Add Filter.

    3-7. You will notice, if you scroll down, all your visualizations have changed. This is the power of Dashboards! Filters apply to every visualization, and you can alter and view your data in different ways all at once, rather than individually applying filters to different visualizations.

    3-8. Now let's say we want to see all traffic EXCEPT where the destination.port field is equal to 443. Rather than create a new filter, we can alter the one we have. Select the box labeled 443 (or whatever you labeled your filter as) and let's look at our options.

    3-9. We can see that we can edit the filter, but if we just want to see results where destination.port is not 443, we can simply select Exclude results. Note that the filter box now says NOT 443 and is red, and all our visualizations have updated their data.

    3-10. Now let's take a look at all our data unfiltered, while keeping the filter available so we don't need to re-create it later. If you select the filter box again, then select Temporarily disable, your data is shown as if the filter were not present; re-enable it whenever you need it again.

    3-11. Let's invert the filter again (getting back to data where destination.port is equal to 443) and re-enable the filter. Our Zeek Conn Logs Count visualization at the top should read 13,145.

    3-12. Now let's say we want to get into some real analysis: we want to look at HTTP traffic where the destination.port field is equal to 443. With the new navigation panel, we can simply pivot to another dashboard with this filter applied.

    3-13. Now select HTTP from the Dashboard navigation visualization. Note that the filter is still present in the new dashboard. This is another very useful feature because it allows us to keep the same parameters of search across any of our data views, dashboards, and searches.

    Note: If we want to pivot to Discover instead, we have to Pin our filter and it will follow us wherever we go in Kibana (anywhere that displays our data, that is). We would select the filter again, and select Pin across all apps.

    3-14. Now let's edit the filter to match where destination.port is equal to 80. Select your filter, then click Edit filter.

    3-15. Change the Value field to 80 and make sure to change the custom label as well.

    3-16. Now we are familiar with manually creating and modifying filters. Select the filter again, and select Delete.

Filtering Part 2: Electric Boogaloo

  4. Let's learn more about applying filters to a dashboard.

    4-1. Head back over to the Zeek Connections Dashboard.

    4-2. Scroll down to the Top Application Protocols bar chart visualization.

    4-3. Select the bar labeled tls (the most common service). Notice how a filter is automatically applied.

    4-4. Scroll back up to the Top Resp Ports data table visualization.

    Top Resp Ports

    4-5. Hover over 443 and take note of the 3 buttons that appear above the value. Selecting the plus (+) icon will filter for that value, selecting the minus (-) icon will filter out that value, and selecting the two-arrow button will expand on the data while still giving you the options to filter for and filter out. Let's filter out that value by selecting the (-) button.

    4-6. Now take a look at the Top Resp Hosts data table visualization.

    Top Hosts

    4-7. Hover over 104.207.159.249 and select the plus (+) button to filter for that value.

    4-8. Select the SSL/TLS dashboard button from the dashboard navigation bar.

    Notice that the Zeek SSL/TLS Log Count visualization shows two entries.

    4-9. Hover over the destination.port: 443 filter and select Include results; notice that the visualization now changes to zero. Then restore the original exclusion by hovering over the filter and selecting Exclude results.

Zeek SSL/TLS Dashboard

  1. Let's look at some key fields in the Zeek SSL/TLS Dashboard and explain how they can be useful in hunting.

    • The SSL/TLS Negotiation Status pie chart gives a quick glimpse of whether or not we are actually getting encrypted traffic

    • The Top Requested Server Names is essentially the website being accessed. Take note of the domain being accessed here

    • The SSL/TLS Validation Status chart shows what status message we got when verifying the integrity of the TLS certificate. It is alarming that we see "Unable to get local issuer certificate" here since we got a success message on the negotiation status in this case

    • The Top Server Issuer and Top Server Subject data tables are very useful for looking at fields that should have a standard format. In this case, we see some results that do not follow the required convention. This should set off some more alarms for us

    • Finally, as with the HTTP and CONN dashboards, we have a saved search at the bottom

    5-1. Let's say we wanted to look at the PCAP associated with these results. Follow these steps:

    • Select the two arrow button in the saved search visualization to expand the log

    • Scroll down until you see a field called pcap.query

    • You can select the Query PCAP button to pull the raw pcap for this SINGLE connection


Summary

In this lab, we learned how to analyze data across various dashboards.

Security App


Lab 7.1: Getting started with the Security app

Objective: Become familiar with the Security app.

Reference Material:


  1. Navigate to the Overview dashboard in Elastic Security

    1-1. From the Kibana Home page, select the Security button.

    1-2. From the Get started with Security page, select Dashboards.

    1-3. Several default dashboards are displayed; select Overview.

  1. Adjust the timepicker to view Alert trends

    2-1. In the Commonly used panel, select BLISTER.

  1. Review the Overview dashboard

    On the Overview dashboard, we can see several visualizations representing alerts and events in our data. We can also see our Recent cases, Recent timelines, and a section for Security news published by the Elastic Security Labs team.

    We should see a number of alerts in the Alert trend visualization. This confirms that there are alerts available for investigation.

  1. Review the Alerts page

    4-1. In the Alert trend visualization select View alerts

    The Alerts page offers various ways for you to organize and triage detection alerts as you investigate suspicious events.

    From the Alerts page, you can filter alerts, view alerting trends, change the status of alerts, add alerts to cases, and start investigating and analyzing alerts. Notice that we are seeing only the open alerts based on the Status filter.

  1. Review the Rules page

    5-1. In the Alerts page, select Manage rules

    In the Rules page, we see the detection rules installed in our Elastic Security app. These prebuilt detection rules are maintained by the Elastic Security Labs team.

    Note: The installed rules you see were preconfigured with your Elastic lab environment. Elastic users would need to install and enable rules in their own environments.

    Each of these rules runs some form of query logic against our data. As those queries return valid results, alerts are generated within a system index specific to the Security app. These prebuilt rules are what powers the Security app out of the box; however, they are optional. We can easily create our own rules in the detection engine as well. Once an alert is generated, an analyst can use different tools to triage the alert.
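Conceptually, each enabled rule is just a scheduled query plus an alert writer: run the query over a time window, and turn every hit into an alert document. A toy Python sketch of that cycle (all names here are illustrative, not the actual detection-engine API):

```python
# Toy model of a detection rule's cycle: evaluate a query predicate over a
# time window and emit alert documents for each hit. Field names follow ECS
# conventions; the function itself is purely illustrative.

def run_rule(events, predicate, window_start, window_end, rule_name):
    """Return alert documents for events matching the rule inside the window."""
    return [
        {"kibana.alert.rule.name": rule_name, "source_event": e}
        for e in events
        if window_start <= e["@timestamp"] < window_end and predicate(e)
    ]

events = [
    {"@timestamp": 100, "destination.port": 23},   # telnet, inside window
    {"@timestamp": 200, "destination.port": 443},  # https, no match
    {"@timestamp": 300, "destination.port": 23},   # telnet, outside window
]
alerts = run_rule(events, lambda e: e["destination.port"] == 23,
                  0, 250, "Accepted Default Telnet Port Connection")
assert len(alerts) == 1
assert alerts[0]["source_event"]["@timestamp"] == 100
```

Real rules add scheduling, look-back, deduplication, and severity/risk metadata on top of this loop, but the core idea is the same whether the rule is prebuilt or custom.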

  1. Review the Timelines page

    The Timelines feature is the primary tool for alert analysis. You will be practicing how to use this tool throughout the course.

    6-1. Select Timelines to open the Timeline feature.

    Note: The Timelines tool appears several places within the Security app to support analyst workflows.

    6-2. Navigate back to the Alerts page.

  1. Add an alert to a Timeline

    We will practice building a timeline to demonstrate how an analyst could start an investigation within the Elastic Security App. Let's add some alerts to a timeline to start this investigation.

    7-1. Hover over the RDP (Remote Desktop Protocol) from the Internet alert in the Alerts by name table

    7-2. In the pop-up panel, select the Add to timeline investigation button.

  1. Review the Untitled Timeline

    8-1. At the bottom of the page, select Untitled timeline

    In the Timeline we can see that the filter is matching on the kibana.alert.rule.name field.

    When we selected the button it applied a query for kibana.alert.rule.name: "RDP (Remote Desktop Protocol) from the Internet" into our Timeline! Now all of the results for that query are available to be searched through.

    Without any investigation, we can tell this alert is informing us that we have at least one external host attempting (or succeeding) to connect to at least one internal host in our environment.

    Usually RDP from the Internet can be a misconfiguration if not properly scoped with firewall rules or security groups in cloud environments.

    Next, we will attach this to a case so that we can track, share and work this case in the future. In this case, these events were generated in our lab environment running on Google Cloud (GCP).

  1. Save the Timeline and attach it to a new Case

    9-1. Select Save in the upper right corner.

    9-2. Update the title of this Timeline to RDP Connections over the Internet (GCP).

    9-3. Select Save.

    We can now attach the Timeline to a Case.

    9-4. Select Attach to case -> Attach to new case.

  1. Create a new Case from the Timeline

    In the Create case page, update the Case fields section with the following information:

    10-1. Name the case Investigating Suspicious RDP Connections

    We now have an open Case for additional triage and investigation.

We have one more area of the Security app to explore before we complete this lab.


Summary

In this lab, we learned how to become familiar with the Security app. We viewed the default dashboards, found alerts in our BLISTER time period, reviewed the included detection rules, added an alert to a Timeline for investigation, then added the Timeline to a Case we created.


Lab 7.2: Security App - Explore Pages

Objective: Utilize the Explore pages to learn about and interpret the data provided by the Security App.

Reference Material:


The Explore pages in the Security App provide a high-level overview of our data in several contexts. It is important to understand the differences between these contexts because each manifests in the Security App in different ways.


Lab 7.2 Part 1: Explore Hosts

Objective: Utilize the Explore pages to learn about and interpret the data provided by the Security App.

Reference Material:


Return to the intro of Lab 7.2


  1. Navigate to Explore -> Hosts

    1-1. Select Explore in the Security menu or by expanding the options widget

    1-2. Select Hosts

    navigate-security-app-hosts

  1. Set the time filter to BLISTER and remove any filters you may have

    2-1. Select the Calendar icon

    2-2. Remove any filters you may have

  1. Observe the Hosts page for an overview of all hosts and host-related security events

    This page displays data from a host context and there are a number of prebuilt visualizations and data tables we can utilize to gain high-level information about our hosts.

    We are able to see that 13 hosts were identified between February 1st, 2022 and March 31st, 2022 with a number of different operating systems, and unique network connections.

  1. Review the customizable Events table and the Events histogram

    4-1. Select Events

    Using the Events histogram we can filter the events by their stackable fields

    • event.action

    • event.dataset

    • event.module

    Now we will adjust the field shown in the Events table and review the available datasets.

    4-2. Select Stack by and change the field to event.dataset if it isn't selected already.

    Now we are able to review the datasets available using the histogram

    4-3. Hover over the histogram to determine what datasets are available. Beneath the histogram is an events table that represents all the events from our hosts.

    There are a number of actions we can take from these stackable field values such as:

    • Filter for

    • Filter out

    • Add to timeline

    • Copy to clipboard

  1. Review the Uncommon Processes table

    5-1. Select Uncommon Processes

    The Uncommon Processes table displays uncommon processes running on hosts, aggregated to show process names with distinct values

    The table is broken down by the following fields:

    • Process name

    • Number of hosts with that process

    • Number of instances of that process

    • Host names with that process

    • The last command issued to run that process

    • The last user to run that process

  1. Filter the endpoint.events.process events

    7-1. In the histogram field list, look for the endpoint.events.process dataset. You may have to scroll in the legend to find it.

    7-2. Next to the endpoint.events.process entry, select the triple dots.

  1. Adjust the field shown in the Events histogram and identify process start events in the event.action field

    8-1. Using the Events histogram, change the stackable field to event.action to observe what actions are associated with these process events

    8-2. Filter for the start events. Select the triple dots next to start then select Filter for.

    This filter will display process start events from our hosts.

    Any of these events can be expanded by selecting the expand arrow under Actions.

    Additionally, we can add the fields we want and sort the columns but we will explore better ways to do that in the future.

  1. Review the event.action: start event information

    During this step we will expand the first entry in the Events table.

    9-1. In the row with the first entry, select the View details button.

    The Event details table displays a list of the event fields and values.

    Here we can scroll through the event fields or search for fields of interest.

    Since we already know this is returning documents relating to processes starting, we can run a search for process fields.

    9-2. Type process in the search field of the flyout pane and observe the various fields.

    There are a number of fields that provide a lot of context to these process events

    • process.name: Process name, sometimes called program name

    • process.command_line: Full command line that started the process, including the absolute path to the executable, and all arguments

    • process.args: Array of process arguments, starting with the absolute path to the executable

  1. Review the Host Risk table

    10-1. Select Host risk. Note that the Host risk score is not enabled by default or during this lab.

    This table will provide scoring from Entity Analytics to calculate a risk score about hosts that are present.

  1. Review the Sessions table

    11-1. Select Sessions

    When enabled, this table will provide any session data captured from Elastic Defend on Linux, macOS, and any containers running in the environment.


Lab 7.2 Part 2: Explore Network

Objective: Utilize the Explore pages to learn about and interpret the data provided by the Security App.

Reference Material:


Return to part 1 of Lab 7.2

Return to the intro of Lab 7.2


  1. Navigate to Explore -> Network

    1-1. Remove any filters you may have and navigate to Explore -> Network

    1-2. Select Day 1 on the timepicker


There are several metric visualizations displayed in this view that can tell us some high-level information about the network such as the number of network events, unique private IPs, DNS queries and more.


  1. Observe network flows and drilldown on a specific IP address

    2-1. Above the Events table, select Flows

    2-2. Under Source IPs in the Flows table, sort the table in descending order by Bytes in and select 103.69.117.6

    This will give us a drilldown view for this IP. Additionally, we can run a series of lookups from this page.

  1. Review the DNS table

    3-1. From the Network page, select DNS.

    Here we can observe the top DNS domains including the total number of queries, unique sub-domains, DNS bytes in and DNS bytes out.

  1. Review the HTTP table

    4-1. Select HTTP

    Here we can observe information about HTTP requests, such as request method, URL path, HTTP status code, the last source IP address that made the HTTP request, and the number of requests.

  1. Review the customizable Events table and the Events histogram

    5-1. Select Events to switch back to the customizable event table

    Here we can observe events and even alerts generated by our network tools without using the detection engine. This is perfect for making sure our data is feeding in correctly, and validating that we have the right fields. In this case we have both Zeek and Suricata configured to log certain entries in their logs to event.kind: alert. Other tools can be configured this way as well such as a firewall or honeypot.

    5-2. Add a filter for event.module: suricata and observe the message field.

    The data in the message field is the signature or rule category from Suricata.

  2. Observe external alerts

    6-1. Select Show only external alerts

    This view displays any event with event.kind: alert


Lab 7.2 Part 3: Explore Users

Objective: Utilize the Explore pages to learn about and interpret the data provided by the Security App.

Reference Material:


Return to part 2 of Lab 7.2

Return to the intro of Lab 7.2


  1. Observe the Users page

    1-2. Select Users under Explore

    navigate-security-app-users

    This page is primarily focused on users active in the environment and any authentication information we can glean from distinct values across our data.

  1. Set the timepicker to BLISTER

    2-1. Verify the time filter is set to BLISTER, which is between February 1st, 2022 and March 31st, 2022.

    2-2. Remove any filters you may have.

  1. Review Authentications

    3-1. Select the Authentication tab

    Here we can see authentication success/failure rate across our environment.

  1. Create filters to show user activity

    4-1. Set filters for event.dataset: endpoint.events.file and event.action: deletion.

    security-app-users-event-filtered

    This will display files that have been deleted by our users. We could further filter down this data by selecting a user name, and toggling columns displayed to show the names of files that were deleted.

Next we will look at User Risk

  1. Review the User Risk table

    5-1. Select User risk. Note that the User risk score is not enabled by default or during this lab.

    Similar to Host risk, we can run Entity Analytics on our data to identify users that might be associated with a number of alerts, candidates for anomalies such as impossible travel and more.


Lab 7.3: Security App - Detection Rules

Objective: Interpret the differences between detection rule types. In this lab we will examine three different rule types.

The Elastic Security app detection engine can be loaded with more than 1,000 detection rules across various data sources such as cloud, SaaS, and endpoint. These rules, running on a schedule, generate alerts for incident response or as starting points for investigation. They not only produce alerts but also offer guidance and insights into the reasons behind each detection rule.

Reference Material:


  1. Observe a KQL detection rule

    1-1. Navigate to Rules -> Detection Rules (SIEM)

    1-2. Run a search for Telnet in the rule search bar and select Accepted Default Telnet Port Connection

    This rule detects network events that may indicate the use of Telnet traffic.

  1. Examine detection rule sections

    2-1. Examine the About section to learn more about this rule

    2-2. Examine the Definition section

    The rule type for this rule is Query which indicates that it uses Lucene or KQL.

    The query that will run is looking for any successful connection with a destination port of 23.

  1. Validate query results

    3-1. Copy the custom query from the rule and navigate to Explore -> Network

    3-2. Paste the query in the search bar and change the timepicker to Day 1

    The network map should be empty, but there will be 24 network events total

    There are valid results for our query from the detection rule, which means that if this detection rule were running, it would have generated alerts for us to investigate.

    3-3. Navigate back to the detection rule

    The custom query will run against the following index patterns by default: auditbeat-*, filebeat-*, packetbeat-*, logs-network_traffic.*. Scrolling further, the Schedule section shows the rule's run interval and look-back time.

    3-4. Examine the rule schedule

    Looking at the schedule we can determine that the query will run every 5 minutes with a look-back time of 4 minutes.

    To recap, the custom query runs against the defined index patterns every 5 minutes. If there is a valid query result, an alert is generated in the Security application's alerts index. In this case, that process cannot complete: the rule is disabled, so it never fires, and the default index patterns do not match our data.

    To use the rule with our own indices, duplication is required. The Additional look-back time is set to 4 minutes for this rule. This ensures there are no missing alerts when a rule doesn't run exactly at its scheduled time.
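The interplay between the 5-minute interval and the 4-minute additional look-back can be sketched numerically: each run queries a window of interval plus look-back, so consecutive windows overlap and a slightly delayed run still covers events near the boundary. This is a simplified model of that behavior:

```python
# Simplified model of a rule schedule with additional look-back time.
# Each run searches from (interval + look-back) ago up to the run time,
# so consecutive query windows overlap by the look-back amount.
from datetime import datetime, timedelta

INTERVAL = timedelta(minutes=5)   # rule runs every 5 minutes
LOOKBACK = timedelta(minutes=4)   # additional look-back time

def query_window(run_time):
    return (run_time - INTERVAL - LOOKBACK, run_time)

t0 = datetime(2022, 2, 1, 12, 0)
t1 = t0 + INTERVAL                      # next scheduled run
w0, w1 = query_window(t0), query_window(t1)

# The windows overlap by exactly the look-back, so an event that lands
# just before a delayed run still falls inside some window.
overlap = w0[1] - w1[0]
assert overlap == LOOKBACK
assert w1[0] < w0[1] <= w1[1]
```

With zero look-back the windows would tile exactly, and any scheduling jitter or ingest delay could drop events into a gap between runs.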

  1. Duplicate the rule

    4-1. Select the stacked dots and click Duplicate rule

    A window will appear giving you the option to duplicate the rule with or without exceptions present or active.

    4-2. Select The rule and its exceptions, then Duplicate

  2. Create duplicated custom rule

    5-1. Add our ecs-zeek-* index pattern and press Enter. Note: optionally, select About to give the rule a new name, then select Save changes.

    5-2. Toggle the Enable button

    Now if we observe any more traffic on destination port 23 over TCP this rule will fire!


  1. Observe an EQL detection rule

    6-1. Navigate back to Rules -> Detection rules (SIEM) and search for MsBuild Making Network Connections. Select the rule to view

    This is an Event Correlation rule, which uses EQL (Event Query Language). It uses a sequence of events for correlating the Microsoft Build Engine making network connections as a defense evasion technique. Events that fall outside of the scope for this query will not trigger an alert.

    One caveat with this rule is that if we want to query our data with EQL in the Security Solution we need to use a Timeline.

  1. Create a Timeline

    7-1. Select Create new timeline to open up a blank timeline that we can enter a query into using EQL.

    7-2. Select the Correlation tab in Timeline

    EQL uses the event.category field as a way to stack events and return results. We must define this first in any EQL query unless we are using a sequence like the one defined in the rule we observed.

    7-3. Type network where true. This will return all network events on Sep 1, 2018

    Let's craft a query that defines a sequence to be utilized in a detection rule to look for interesting traffic; specifically, sequenced events for internal clients trying to resolve what appears to be "www.facebook.com."

    7-4. Copy the below query and paste it into Timeline

    sequence by source.ip [network where destination.ip == "172.16.100.1"] [network where dns.question.name == "www.facebook.com"]

    Correlation is showing us events that match in a sequence. Blue events match one sequence in the query and red events match in another.

    We will cover Timelines in a later lab; however, this visually depicts what the Security Solution is doing for us when we utilize a sequence in a detection rule.
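To make the sequence semantics concrete, here is a toy Python matcher that mimics what sequence by source.ip does: group events by the join key and require each step to match in timestamp order. Real EQL adds maxspan, until, and other controls that this sketch deliberately ignores.

```python
# Toy sketch of EQL "sequence by" semantics: per join-key, step predicates
# must match in chronological order. Illustrative only, not real EQL.

def sequence_by(events, key, steps):
    """Return completed sequences of events, grouped by `key`, where the
    predicates in `steps` match in timestamp order."""
    progress = {}   # key value -> (index of next step, events matched so far)
    results = []
    for e in sorted(events, key=lambda ev: ev["@timestamp"]):
        k = e.get(key)
        idx, acc = progress.get(k, (0, []))
        if idx < len(steps) and steps[idx](e):
            idx, acc = idx + 1, acc + [e]
            if idx == len(steps):          # sequence completed for this key
                results.append(acc)
                idx, acc = 0, []
            progress[k] = (idx, acc)
    return results

events = [
    {"@timestamp": 1, "source.ip": "10.0.0.5", "destination.ip": "172.16.100.1"},
    {"@timestamp": 2, "source.ip": "10.0.0.9", "dns.question.name": "www.facebook.com"},
    {"@timestamp": 3, "source.ip": "10.0.0.5", "dns.question.name": "www.facebook.com"},
]
hits = sequence_by(
    events, "source.ip",
    [lambda e: e.get("destination.ip") == "172.16.100.1",
     lambda e: e.get("dns.question.name") == "www.facebook.com"],
)
assert len(hits) == 1 and hits[0][0]["source.ip"] == "10.0.0.5"
```

Note that 10.0.0.9's DNS event alone does not complete a sequence; only the host that performed both steps, in order, produces a match, which is exactly what the join key buys us.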

    Now that we have examined a few detection rules, let's go through the process of creating a brand new rule.

  1. Generate some data for a detection rule

    8-1. Navigate to the Windows Endpoint in Strigo

    8-2. Open a Command Prompt by typing cmd.exe into the search box

    8-3. Type whoami.exe

    Whoami is both a process and a command that can give an attacker user information, such as the currently logged-in user. This is useful to an attacker for many reasons; most commonly, if they have remote access to a compromised system, they can easily identify whether they need to elevate privileges to accomplish their goals.

  1. Create a new detection rule for our whoami activity

    9-1. Navigate to the Rules page and select Create new rule

    KQL is already preselected for our rule type so all we need to do is define our query.

    9-2. Type process.name: whoami.exe into the query bar

    Before we create the rest of the rule we can test it with Rule preview to make sure the query does what we want it to do.

    9-3. Select Rule preview to preview the results of the query

    9-4. Leave the preview timeframe set for Last 1 hour and select Refresh

    We can see that the rule preview successfully returns results. Optionally we can expand the events here by clicking the arrows.

    9-5. Scroll down and select Continue

    9-6. Enter the following for the About section of the rule:

    • Test rule - Whoami Process Activity in the Name field

    • Test for execution of the whoami process into the Description field

    • student in the Tags field

    9-7. Leave the Schedule rule options the same and select Continue

    9-8. Do not select any Rule actions and select Create & enable rule

    After a few minutes there should be alerts for the activity. It's worth mentioning that this rule might be noisy in a production environment for various reasons such as RMM software that invokes whoami to gain information for inventory purposes or IT admins that want to make sure they are in the right context at any given time. Given that our rule was very simple we can examine one that was designed to catch some of those false positives.

  2. Compare detection logic

    10-1. Navigate back to Rules -> Detection rules (SIEM)

    10-2. Search for Whoami and open the rule named Whoami Process Activity

    Notice that the key differences in this rule make it more appropriately scoped for production environments, starting with the event type and including more in-depth metadata around the rule.


To conclude this lab we will practice Elastic's newest query language, the Elasticsearch Query Language (ES|QL).

  1. Observe an ES|QL detection rule

    11-1. Return to the Rules page and search for Malware Infection

    11-2. Select the rule Potential Widespread Malware Infection Across Multiple Hosts

    11-3. Identify the Definition field in the rule description

    Notice, the Rule Type for this rule is ES|QL.

    This rule is designed to look for three or more unique hosts with endpoint alerts where the event.code field in the endpoint alert contains one of the following values "malicious_file", "memory_signature", or "shellcode_thread".

    Simply said, the rule uses endpoint alert data to determine when a malware signature is triggered on at least three unique hosts. Let's practice using this query against our BLISTER data set.

  1. Run the ES|QL query

    12-1. Copy the ES|QL query from the Potential Widespread Malware Infection Across Multiple Hosts rule to your clipboard.

    12-2. Select Untitled timeline

    12-3. Select the ES|QL tab on the top of the Timeline menu.

    12-4. Paste the ES|QL query from your clipboard into the query box.

    12-5. Select BLISTER in the Date quick select.

    You should see the following output: "No results match your search criteria".

    Good! It does not look like we have evidence of a potential widespread malware infection. However, we will modify our query to search if there is evidence of at least one infected host.

  1. Modify ES|QL query

    13-1. Modify the integer 3 to 1 on the last logical statement of the query. Your query should now look like this:

    from logs-endpoint.alerts-* | where event.code in ("malicious_file", "memory_signature", "shellcode_thread") and rule.name is not null | stats hosts = count_distinct(host.id) by rule.name, event.code | where hosts >= 1

    We can now see evidence of three alerts for three separate malware rules.

    The stats...by function in this query is a powerful example of ES|QL's ability to aggregate values in a query. In our example query, the stats...by function grouped unique host ids by the rule name and event codes of our endpoint alerts.
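A small Python sketch of what that stats ... by aggregation computes (a simplified model for intuition, not ES|QL itself):

```python
# Sketch of ES|QL's `stats hosts = count_distinct(host.id) by rule.name,
# event.code`: count distinct host ids per (rule.name, event.code) group.
from collections import defaultdict

def stats_count_distinct(rows, distinct_field, by_fields):
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[f] for f in by_fields)
        groups[key].add(row[distinct_field])          # sets dedupe host ids
    return {key: len(ids) for key, ids in groups.items()}

alerts = [
    {"host.id": "h1", "rule.name": "Malicious File", "event.code": "malicious_file"},
    {"host.id": "h1", "rule.name": "Malicious File", "event.code": "malicious_file"},
    {"host.id": "h2", "rule.name": "Malicious File", "event.code": "malicious_file"},
]
result = stats_count_distinct(alerts, "host.id", ["rule.name", "event.code"])
assert result[("Malicious File", "malicious_file")] == 2   # two distinct hosts
```

The final `where hosts >= 3` clause in the rule then simply drops any group whose distinct-host count falls below the threshold, which is what makes the detection "widespread" rather than per-host.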

    Let's add another field to our query to help us better understand the relationship of our data.

  1. Better understand the ES|QL query logic

    14-1. Add the field host.name to the end of the line with the stats...by function. Your query should now look like this:

    from logs-endpoint.alerts-* | where event.code in ("malicious_file", "memory_signature", "shellcode_thread") and rule.name is not null | stats hosts = count_distinct(host.id) by rule.name, event.code, host.name | where hosts >= 1

    We can now see more clearly the relationship of events produced from the query represented in the rule Potential Widespread Malware Infection Across Multiple Hosts.

    Let's look at one more ES|QL example.

  1. Examine more complex queries

    15-1. Copy and paste the below query:

    from logs-endpoint.events.process-*, logs-system.security-* | where host.os.family == "windows" and event.category == "process" and event.action in ("start", "Process creation", "created-process") and to_lower(process.name) in ("cmd.exe", "powershell.exe", "conhost.exe") and (starts_with(to_lower(process.parent.executable), "c:\\windows\\system32") or starts_with(to_lower(process.parent.executable), "c:\\windows\\syswow64")) | keep process.name, process.parent.name, host.id | stats hosts = count_distinct(host.id), cc = count(*) by process.parent.name | where cc <= 10 and hosts == 1

    This query demonstrates some more functionality of ES|QL such as:

    • to_lower to lowercase process.name

    • starts_with to test the beginning of process.parent.executable

    The query itself is a hunting technique to look for unusual Microsoft native processes spawning cmd.exe, powershell.exe or conhost.exe on a unique host.
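The per-event condition at the heart of that query can be approximated in Python; this is a simplified illustration of the to_lower and starts_with logic, not a substitute for running the ES|QL itself:

```python
# Approximation of the query's per-event predicate: case-fold the process
# name and check the parent executable's directory prefix, mirroring the
# ES|QL to_lower and starts_with conditions.

SHELLS = {"cmd.exe", "powershell.exe", "conhost.exe"}
SYSTEM_DIRS = (r"c:\windows\system32", r"c:\windows\syswow64")

def suspicious_spawn(event):
    name = event.get("process.name", "").lower()
    parent = event.get("process.parent.executable", "").lower()
    return name in SHELLS and parent.startswith(SYSTEM_DIRS)

assert suspicious_spawn({
    "process.name": "CMD.EXE",                       # case-folding matters
    "process.parent.executable": r"C:\Windows\System32\svchost.exe",
})
assert not suspicious_spawn({
    "process.name": "cmd.exe",
    "process.parent.executable": r"C:\Program Files\app\app.exe",
})
```

The remaining stats and where clauses in the real query turn this per-event match into a hunt: they keep only parent processes seen on exactly one host with at most 10 occurrences, i.e. the rare spawners worth a closer look.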


Summary

In this lab, we learned about three different detection rule types. We created and tested our own custom rule. We also experimented with modifying an existing alert query to enrich our results.


Lab 7.4: Security App - Alerts

Objective: Investigate alerts in the Security app from data produced by Elastic Defend and ingested into Elasticsearch.

Reference Material:


The purpose of a detection rule is to generate alerts for an investigation regardless of the use case. In this lab we will investigate alerts, generated during the BLISTER time period, within the Security app. We will review a detection rule and investigate two different alerts.

  1. Observe the Endpoint Security detection rule

    1-1. Navigate to Rules -> Detection Rules

    Elastic has a detection rule called Endpoint Security, enabled by default, that runs a query for event.type: alert and event.module: endpoint, so that any data we have with those field values automatically generates Defend-related alerts in the Security app.

    1-2. Search for Endpoint Security.

    1-3. Examine the override settings in the Endpoint Security rule.

    There are a number of field values being overridden with different values, such as event.severity and event.risk_score; however, the settings we are interested in are Rule name and Timestamp. The rule name will be automatically populated by the event message, but more importantly, the timestamp will be overridden by event.ingested.

    This means that if we have data from previous years or months, well beyond our schedule, an alert will be generated in near real time. This makes the Security application and detection rules valuable for incident response as well.
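A minimal sketch of why the timestamp override matters, assuming a simplified model where a rule only alerts on events whose chosen timestamp field falls inside its look-back window:

```python
# Simplified model of a rule's timestamp override: the rule compares its
# look-back window against either the event time (@timestamp) or, when
# overridden, the ingest time (event.ingested).
from datetime import datetime, timedelta

def in_rule_window(event, now, lookback, timestamp_override=None):
    field = timestamp_override or "@timestamp"
    return now - lookback <= event[field] <= now

now = datetime(2022, 3, 1, 12, 0)
old_event = {
    "@timestamp": datetime(2022, 2, 1),            # happened a month ago...
    "event.ingested": now - timedelta(minutes=1),  # ...but just arrived
}

# Without the override the rule misses the late-arriving event entirely;
# with event.ingested it alerts as soon as the data lands.
assert not in_rule_window(old_event, now, timedelta(minutes=9))
assert in_rule_window(old_event, now, timedelta(minutes=9),
                      timestamp_override="event.ingested")
```

This is the property that makes the rule useful when restoring old data during an incident response: the alert fires when the data is ingested, not when the activity originally occurred.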

    Since this rule is enabled by default, there is no further action needed with it. Now that we have reviewed the override settings for a detection rule, we can move on to investigating alerts.


  1. Execute a sample

    2-1. Navigate to the Windows Endpoint in Strigo.

    navigate-security-app-alerts

    2-2. Navigate to Desktop -> Training -> Prevention Samples -> Macro.

    2-3. Double click to open 5e4a52b4b095a82f4d39e98d420674388121ab2265f5df2bacf7fefe76ddf370.doc.

    2-4. Select Enable content.

    We will start to see Defend notify us that it is preventing various actions. After a few minutes we should see events populate in the Security app.

  1. Investigate alert context in Elastic Security

    3-1. Navigate to the Alerts page and ensure the timepicker is set to Today.

    3-2. Find the earliest alert and expand it.

    3-3. Scroll down to the Highlighted field section and observe the rule.description field value.

    This field contains the reason this alert fired. Elastic Defend observed Microsoft Word spawning a command shell with suspicious command line arguments.

    3-4. We can observe those command line arguments a number of ways. Scroll up on this event, select Table to look at the table view of the JSON document and search for process.command_line.

    There are a number of arguments here starting with cmd.exe /c powershell followed by a bunch of variable declarations and obfuscation. Before we pivot over to a Timeline let's open the Event Analyzer.

  1. Investigate alerts with Event Analyzer

    4-1. Navigate back to the alert and open Event Analyzer.

    The Event Analyzer visually describes the behavior that Defend observed: WINWORD.EXE spawning cmd.exe. Defend terminated the cmd.exe process once it was determined to be malicious. In addition, we can see other forms of telemetry collected by Defend, such as file, library, network, and registry events leading up to this event.

    4-2. Close the analyzer view

  1. Investigate alert in Timeline

    5-1. Select the Timeline icon next to the alert.

    Now that the alert is shown for investigation, add the parent process name WINWORD.EXE so we can observe related parent-process events: select Table, search for process.parent.name, and add this field to the Timeline.

    More events are shown on the timeline now. We can explain what has happened by looking at the event.category and event.action fields.

    Let's break down each event to describe what is happening:

    • The earliest event describes the cmd.exe process starting from the C:\Users\trainee\Documents directory

    • The second describes Elastic Endpoint firing a Malicious Behavior alert

    • In the third event, the cmd.exe process is terminated as a result of the prevention

    • The final event is the result of the Endpoint Security SIEM rule firing to alert an analyst of the activity


Summary

In this lab we generated alerts and examined them via the alert flyout, Event Analyzer and Timelines. We also created a Timeline for investigating these events.

Lab 7.5: Security App - Timelines

Objective: Discover how to utilize Timeline for investigating events and alerts.

Reference Material:

Timeline is the primary investigative tool within the Security application, most often used for alert investigation. However, it can be used in a variety of ways depending on an analyst's workflow. In this lab, we will explore the different features in Timeline, as well as how we can customize what we see to fit our needs.

We can open up Timeline from almost anywhere within Elastic Security and it will dynamically update as we explore our data.


  1. Create new Timeline

    1-1. Navigate to the Timelines tab under Security, select Create new timeline and set the timepicker to Day 1.

    For performance reasons, Timeline won't show us all of our events for Day 1. Knowing this, it is usually a good idea to have a query in mind when using a Timeline. Even when investigating alerts, we will have some indicator, alert, hostname, or IP address to pivot from. Timeline supports four different query languages, but for now we will use KQL.

    1-2. Enter destination.ip: 172.16.100.0/24 AND NOT source.ip: 172.16.100.0/24 into the query bar to return events that originate from external IPs and connect to internal IPs.
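The KQL above matches on CIDR membership: the destination must fall inside the internal /24 while the source must not. The same logic can be sketched with Python's standard ipaddress module (the sample events are hypothetical, for illustration only):

```python
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("172.16.100.0/24")

def matches(event: dict) -> bool:
    """Replicates: destination.ip: 172.16.100.0/24 AND NOT source.ip: 172.16.100.0/24"""
    return (ip_address(event["destination.ip"]) in INTERNAL
            and ip_address(event["source.ip"]) not in INTERNAL)

# Hypothetical sample events:
events = [
    {"source.ip": "103.69.117.6", "destination.ip": "172.16.100.53"},   # external -> internal
    {"source.ip": "172.16.100.53", "destination.ip": "172.16.100.55"},  # internal -> internal
]
print([matches(e) for e in events])  # → [True, False]
```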

    A number of events are returned; however, we want to isolate the Zeek events, and we only want to look at connections.

  1. Verify Data views

    2-1. Verify that we are looking at the Zeek indices by selecting Data view and then clicking Advanced options.

    There are several index patterns listed here with the first one being .alerts-security.alerts-default. That index pattern is what allows us to see the triggered detection rule alerts in our Timeline. We should also see an index pattern called ecs-*.

    So long as the correct index patterns are available in the Security application we can explore the events.

  1. Create a filter in Timeline

    3-1. Add the filter event.dataset: conn to isolate only our Zeek connection logs.

    Timeline uses a concept called Event Renderers to explain events in a plaintext way that tells a story.

    "This host made a connection to this host and downloaded a file".

    It builds this context from the event data. Each event renderer is different and depends on the data observed in Elastic Security. By hovering over the fields in the renderer, we can observe their field values.
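Conceptually, an event renderer is just a template over ECS fields. A toy version of one (field names follow ECS; the sentence template is illustrative, not the Security app's actual renderer):

```python
def render_conn_event(event: dict) -> str:
    """Render a Zeek connection event as a one-line plaintext story,
    the way a Timeline event renderer summarizes an event."""
    return (f"{event['source.ip']} made a {event['network.transport']} "
            f"connection to {event['destination.ip']} "
            f"({event['network.bytes']} bytes)")

# Hypothetical Zeek conn event:
event = {"source.ip": "103.69.117.6", "destination.ip": "172.16.100.53",
         "network.transport": "icmp", "network.bytes": 420}
print(render_conn_event(event))
# → 103.69.117.6 made a icmp connection to 172.16.100.53 (420 bytes)
```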

  1. Examine Event Renderers

    4-1. Select the gear button to observe the different event renderers.

    There are several event renderers per data type.

    4-2. Close the Event Renderer view

    Notice each row is an event. The fields in each event will only be populated if they exist in the data that is returned, otherwise they will be blank.

    4-3. Hover over the fields displayed to see their type.

    For simplicity's sake, we will customize the view to display this information in a tabular view.

    4-4. Select the gear button again and select Disable all to turn off event renderers for Zeek

    We now have a lot more screen real estate for these events, but we lost the fields the event renderers gave us. Next we will add them back.

  1. Customize Timeline

    5-1. Select the Fields button

    This is where we can see the fields that the Security app adds to the Timeline by default. The Security app supports more than 40 categories of events and over 1,000 ECS fields. We can select any of the categories/fields that are supported in the Security app, or we can search for the fields that we want to add back in to our Timeline.

    5-2. Add each of the following fields by typing the field name in the search bar and selecting it:

    • event.duration

    • network.bytes

    • network.packets

    • network.transport

    • source.packets

    • destination.packets

    5-3. By selecting the x next to the field name in the top row of the table, remove the following fields:

    • message

    • event.action

    • host.name

    • user.name

    Feel free to sort the fields however makes sense for you. Below is an example timeline table for reference.

    We have already established that the events we are looking at describe an external host talking to internal hosts for some unknown reason. This traffic also appears to be going over ICMP, which is suspicious. Most, if not all, of these events occur within a timeframe of a few seconds.

  1. Investigate ICMP traffic

    6-1. Find the earliest event (the last event in our view) by sorting the events using the @timestamp field arrow.

    There are several observations to make here. One is that this last event lasted around 34 minutes: a 34-minute connection over ICMP with over 7,000 responding packets. Another is that all events occurring after it involve only one host, 172.16.100.55. We knew there were two hosts from the metric visualization earlier, but now we can see why. There is enough here to start making notes for later as we investigate this event.
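A long-lived ICMP "connection" with thousands of responding packets is a classic ICMP-tunneling indicator. That heuristic can be sketched as follows (the thresholds are illustrative assumptions, not an Elastic rule; per ECS, event.duration is measured in nanoseconds):

```python
# Sketch of an ICMP-tunneling heuristic based on the observations above.
# Thresholds are illustrative; per ECS, event.duration is in nanoseconds.
NANOS_PER_MIN = 60 * 1_000_000_000

def looks_like_icmp_tunnel(event: dict,
                           min_minutes: int = 5,
                           min_packets: int = 1000) -> bool:
    long_lived = event["event.duration"] >= min_minutes * NANOS_PER_MIN
    chatty = event["destination.packets"] >= min_packets
    return event["network.transport"] == "icmp" and long_lived and chatty

# Hypothetical event mirroring what we observed in Timeline:
event = {"network.transport": "icmp",
         "event.duration": 34 * NANOS_PER_MIN,   # ~34-minute connection
         "destination.packets": 7000}
print(looks_like_icmp_tunnel(event))  # → True
```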

    6-2. Select the note button on this event, describe the event and then select Add note

    The note-taking feature in Timeline supports Markdown, which lets us format our notes however we want. In addition, we can include links as well as references to additional Timelines. When we are done, the note appears rendered alongside our event, which is useful for describing an event in human terms and pinning it for us in the Timeline.

    6-3. Select the blue Save button in the top right corner to save the timeline

    Let's do some simple investigating with our new suspicious external host.

  1. Investigate suspicious host events

    7-1. Replace our previous query with related.ip: 103.69.117.6 to observe other Zeek connections from and to this IP

    7-2. Once again, make sure the events are sorted properly with the @timestamp field. The note we made should be at the top of the list.

    Looking at these events more closely, we can see this external host talked to another internal host of ours, 172.16.100.55, after its connection to 172.16.100.53.
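The related.ip field we queried is an ECS convenience field: an array holding every IP address seen anywhere in the event, which is why a single query catches this host as both source and destination. A sketch of that matching logic (sample events hypothetical):

```python
def matches_related_ip(event: dict, ip: str) -> bool:
    """Replicates KQL `related.ip: <ip>`: true if the IP appears
    anywhere in the event's related.ip array (ECS convention)."""
    return ip in event.get("related", {}).get("ip", [])

# Hypothetical events: the first involves our suspicious external host.
events = [
    {"related": {"ip": ["103.69.117.6", "172.16.100.53"]}},
    {"related": {"ip": ["172.16.100.53", "172.16.100.55"]}},
]
print([matches_related_ip(e, "103.69.117.6") for e in events])  # → [True, False]
```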

  1. Create a new note

    8-1. Create another note describing this event by selecting the note button and then clicking Add note

    8-2. Select Notes

    This view shows us only our notes and we can select the arrow to expand the events themselves.

    8-3. Select Pinned

    This view of the Timeline shows only the two events we have left notes on. However, we could pin several events, and they would all show up here in order by timestamp. Just like with Notes, we can optionally open the event to get the entire document instead of just the fields, or add new notes to the same events.

We now have enough here to move this into a Case for investigating later.

Summary

In this lab we used Timeline to display, filter, search, and describe events of interest across our dataset.

Lab 7.6: Security App - Cases

Objective: Use Cases to describe a set of events or investigations for the purpose of tracking incidents.

Reference Material:


Cases is how we can organize our Timelines, thoughts, progress, and other bodies of work when conducting an investigation.

  1. Open the External to Internal Traffic (Zeek Conn) timeline from the Timelines lab and attach it to a new Case

    1-1. Navigate to Elastic Security and select Timelines

    1-2. Select the Timeline we created previously

    1-3. Select Attach to Case -> Attach to new case

  1. Create a new case from the External to Internal Traffic (Zeek Conn) Timeline

    Here we can fill out details related to this case, adding tags and other relevant information. Since we are creating this Case from a Timeline, a linked reference appears in the first comment of our Case.

    2-1. Name the new case Suspicious ICMP Traffic, fill out additional information describing the Case, then select Create case.

    Don't worry about alert syncing or external connectors. Since the data this Timeline references doesn't contain alerts, there is no reason for them to sync with the Case status.

    Our case has been created and we can see the rendered view with our Timeline reference and some tasks associated.

    2-2. Select Status to change this case to In progress.

  1. Observe our newly created Case

    3-1. Navigate to the Overview Dashboard

    Observe that our newly created Case is called out on the left-hand slider.

As we work this case, we can add additional alerts, provide more context, and involve other teams using third-party systems such as Jira or ServiceNow.


Summary

In this lab we took our newly created Timeline and attached it to a Case for further investigation and triage.
