Jv Cyberguard

Cybersecurity Home Lab - Splunk logs and Troubleshooting

Updated: Jul 18, 2023

Part 6b- Ingesting logs in Splunk (Troubleshooting & Network Migration)





*Alright, here is where things got a little crazy, so here is my little spiel. I believe that my mistakes and aha moments in this section are perfect learning points. They also demonstrate how to troubleshoot and research. N.B.: where you see red markings in the screenshots, they represent what is actually taking place.


Quick recap:

The network diagram at the very top of the page is our desired topology, with the Splunk server being assigned IP address 192.168.4.10. However, when we first deployed our Splunk instance in Part 5, we placed it on the hypervisor's subnet, also known as the VMware NAT network, so the IP of the Splunk machine is currently 192.168.91.132/24 (your IP will not be the same as mine because it depends on the subnet of your VMware NAT network; the same principle applies).



However, according to the topology below, it should sit behind the firewall on VMnet6 and be assigned 192.168.4.10, and rightfully so, because I believe this better simulates a real-world corporate network environment where em0 is our gateway to the VMware NAT network, which simulates the "internet". This way, the traffic from the DC on VMnet3 will stay inside the corporate network, only going on to VMnet6 rather than exiting the WAN interface (em0) of the pfSense firewall.


Our Goal:


Therefore, our goal, prior to entering the IP for our Splunk machine in the Universal Forwarder wizard, is to perform a network cutover (network migration) of the Splunk box (192.168.91.132), which is presently on the NAT network (reference number 1 in the image below), to VMnet6, where it will be assigned 192.168.4.10 (reference number 2 in the image below).



The process:


For about an hour, I went down a rabbit hole of trying to figure out how to do this in my lab.

The first step was to add another network adapter and place it on VMnet6.


The Splunk SIEM now has two NICs: one connected to the NAT virtual network and the other connected to the firewall on VMnet6. One would think that should solve the issue. The videos I was using to help configure this lab had the traffic between the Splunk server and the Windows machine going through the VMware NAT network. However, I had a different approach in mind: I wanted all traffic to pass through the pfSense firewall, not just across the VMware virtual network. Yet any time I tried pinging the default gateway for VMnet6, it wasn't going through. Despite adding the second NIC, I also was not able to ping the DC at 192.168.2.10.


Here's a secret: the error that prevented me from completing the cutover is in the screenshots below. See if you can find it before I disclose it at the end.


So I tried a couple of things to accomplish this. First, I created a snapshot of the VM in case I made things worse instead of better.



I first had to identify the route the packets take through my network. To do this, I entered ifconfig, then ip a, to identify how the interfaces were configured, and finally ip route list to identify the current routes.
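For reference, this is roughly the sequence I ran (the interface names ens32/ens33 below come from my VM and will likely differ on yours):

ifconfig          # legacy view of the interfaces and their addresses
ip a              # modern equivalent; also shows link state and assigned IPs
ip route list     # prints the routing table, including the default gateway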


The ifconfig output showed I had three interfaces.


The VMware NAT network interface (ens32) on 192.168.91.0/24, the VMnet6 connection to the firewall (ens33), and of course the loopback interface, which is essentially the machine itself.



Now, to be clear, the interface on the Splunk machine that I want to receive the DC's Windows event logs on is ens33, which is the connection to the pfSense firewall. So what I found interesting at first glance of the output above was that no IPv4 address was assigned to that interface, and if I wanted Windows event logs to be sent from the DC to the SIEM, I needed to get one assigned. So I assigned it a static IP using the command below.


sudo ip addr add 192.168.4.10/24 dev ens33
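To confirm the address actually landed on the interface, a quick check like the one below should now show 192.168.4.10/24 under ens33. Note that an address added this way is not persistent and will disappear after a reboot (which comes up again later).

ip -4 addr show dev ens33     # should list "inet 192.168.4.10/24" on ens33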



As you can see, I still was not able to ping the firewall at 192.168.4.1 which according to the topology map is supposed to be the SIEM's gateway.


At this point, I began to reach, so I tried the command sudo iptables -L to list the firewall rules for the machine. (These are good troubleshooting commands to know in general, which is why I included this in my lab.) The reason I tried the command is that I wanted to see if there were any deny policies in place for ICMP replies.
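If you want more detail than the plain -L output, the variation below also shows the chain policies and packet counters, which makes it easier to spot a DROP or REJECT rule matching ICMP (on a default Ubuntu Server install the chains are usually empty with ACCEPT policies):

sudo iptables -L -n -v     # list all chains with numeric addresses and packet/byte counters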



Interestingly, I was able to ping the SIEM from the pfSense firewall, so clearly ICMP was not the issue...


The next command I tried was ip route list (which outputs the routing table for the Splunk machine). I was trying to identify the default gateway that was set, to see if the issue was traffic going out via the interface connected to the VMware NAT network, ens32 (192.168.91.132), instead of the interface connected to the firewall, ens33 (192.168.4.10).


I realized the default route was set to the gateway of the VMware NAT interface, so I changed it to the gateway of the adapter connected to the firewall, which is interface ens33.


To do this, I had to delete the existing default route using the command:

sudo ip route delete default


Then add the new default gateway using the following command:

sudo ip route add default via 192.168.4.1 dev ens33
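After swapping the default route, re-running ip route list should show the default now pointing at the pfSense interface, with output beginning roughly like this:

ip route list
# default via 192.168.4.1 dev ens33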



At this point, I tried pinging google.com and the server, and still had no success.


I had verified the any-any rule for the Splunk interface on the firewall but didn't see anything of concern there, so I continued searching elsewhere.


Next, I tried pinging with the -I switch, which allows you to specify the IP/interface you want the machine to ping from. However, that too was unsuccessful.
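For example, the -I switch can take either the interface name or the source address, so either form below forces the echo requests out of the firewall-facing side instead of letting the routing table decide:

ping -I ens33 192.168.4.1            # ping the pfSense gateway, sourcing from ens33
ping -I 192.168.4.10 192.168.2.10    # ping the DC, sourcing from the 192.168.4.10 address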




Next, I tried shutting down the interface connected to the VMware NAT network to force traffic through the VMnet6 interface connected to the firewall, but I still was not able to ping the firewall or the DC on the other subnet.
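Taking the NAT-facing NIC offline (and bringing it back later) is done with ip link; like the ip addr change earlier, this does not survive a reboot:

sudo ip link set ens32 down     # take the NAT-facing NIC offline
sudo ip link set ens32 up       # bring it back later if needed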



As you can see below, ens32 was no longer listed because it was down.


Now that it was down, the route list looked much better, but the pings still did not work.


It was at that point that I decided to revisit pfSense to see if I could find any more clues there.


Then, while looking at the page below, it clicked: ICMP is a layer 3 protocol, not layer 4, and my firewall rule was only matching a layer 4 protocol, so the ICMP traffic was getting blocked.



So I probably was connected all along, but not being able to ping the server made me think otherwise.


Once I changed the protocol to any in the firewall rule, I was able to ping bi-directionally.



Now, I did change a bunch of things, so what I was curious to know was whether the other interface truly has to be disabled, or whether traffic goes out the correct interface automatically. So I re-enabled the interface, and the result is below.


The machine now had two default gateways, one on each interface.



But I was still able to ping the server.




So my conclusion is that the default gateway definitely had to be added, but the main issue was the firewall config.

Before I make these routing changes persistent (https://www.howtogeek.com/799588/how-to-set-the-default-gateway-in-linux/), I will restart to allow the network configs to return to their original state and confirm one more time what fixed the issue.


What's cool too is that the Security Onion Suricata logs actually caught all the action, which shows the SPAN port is working well.


After the restart, the changes were lost, so I figured I would make them persistent by updating the configuration file accordingly. I leveraged the following resources to do this.



Navigated to /etc/netplan/

Then ran sudo gedit NameOfConfigFile.yaml

It will open the Ubuntu server network config file

It will be bare, but below is what I added.

I disabled the ens32 interface (connected to the VMware NAT network) so that it would stay down past reboots by adding activation-mode: off to the file.

I set the default gateway by entering it under the firewall-facing interface:

routes:
  - to: default
    via: 192.168.4.1

Note that whitespace is important. Make sure each successive level of indentation is two spaces, and take care to include the hyphen "-" in the "- to:" line. This will set a default route via the gateway at 192.168.4.1. I also set my nameservers, specifying my domain and the DNS servers.
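Putting those pieces together, the config file ends up looking roughly like the sketch below. Treat the search domain and DNS addresses as placeholders for your own values; your interface names and the exact file name under /etc/netplan/ may also differ:

network:
  version: 2
  ethernets:
    ens32:
      activation-mode: off          # keep the NAT-facing NIC down across reboots
    ens33:
      addresses:
        - 192.168.4.10/24           # static IP for the SIEM on VMnet6
      routes:
        - to: default
          via: 192.168.4.1          # pfSense interface acting as the default gateway
      nameservers:
        search: [yourdomain.local]  # placeholder: your AD domain
        addresses: [192.168.2.10]   # placeholder: your DNS server(s), e.g. the DC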



To apply the changes, save the file and then enter sudo netplan apply, or sudo netplan try if you want to test out the settings first in another terminal window.


Now it works perfectly past reboots. I may try to enhance it at some point, but I can simply switch the link to up for the other interface if I need to. Based on my research, running two default gateways is a very technical task, so I will only keep the connection to the firewall for now and leave the other interface down.



So now we can return to the Splunk Universal Forwarder wizard and enter the IP address. We have confirmed the path the traffic will be traversing, so we can enter the IP address according to the topology along with the default port that is prompted.




Then, on the next page, enter the IP again with the default port that is prompted, which is 9997.
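As a side note, the same settings can also be applied from an elevated command prompt on the DC instead of through the installer wizard. The sketch below assumes the default Universal Forwarder install path and the default ports (8089 for the deployment server's management port, 9997 for receiving):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" set deploy-poll 192.168.4.10:8089
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" add forward-server 192.168.4.10:9997
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart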

When it is done, we can go into Splunk.


On the home screen, click Add Data.



We will click Forwarder on the next page.




And behold, our DC... it's now visible in Splunk.


Click on the machine name and it will move over to the selected hosts column. Also, in the New Server Class Name field, you can enter: Domain Controller. Then click Next.


For now, we will be monitoring all local event logs from our DC.




Remember, Splunk stores the data in an index; we will use the one that we created.
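Under the hood, these wizard choices become a small app that the deployment server pushes to the forwarder, containing an inputs.conf with Windows event log stanzas along these lines (the index name is a placeholder for whichever index you created, and there will be one stanza per selected log channel):

[WinEventLog://Security]
disabled = 0
index = your_index_name

[WinEventLog://Application]
disabled = 0
index = your_index_name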



Click Review and then Submit. You can verify against mine.



If you go back to Indexes under Settings, you will see that the data is starting to increase.


We can even search to see what was logged.
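A quick search like the one below is enough to confirm that events are flowing; swap in your own index name and the DC's hostname:

index="your_index_name" host="YOUR-DC-HOSTNAME" | head 20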


So that is my version of the lab for now. I plan to document another part of the lab where I harden the firewall. In the meantime, feel free to show initiative and build out the lab further.



