Manager Installation and Administration
This document describes the installation and administration of the Manager in an On-Premises environment.
About Manager
The Manager collects information from Sensor appliances, processes the data, and presents it to the user. The Manager receives artifacts (such as executables and documents) that are downloaded or otherwise acquired by users and passes them to Analyst for immediate analysis. The results of the analysis are collected and presented to the user via a web portal using an incident-centered approach, in which evidence from run-time analysis, network monitoring, and anomaly detection are correlated to provide prioritized and actionable threat intelligence.
The Manager is also responsible for acquiring the latest network behavior models that are associated with malware activity. These are automatically downloaded from the VMware backend.
The Manager provides a dashboard, the User Portal, from which you manage the other VMware NSX Network Detection and Response appliances in your network.
The Manager is offered to customers with stringent privacy and policy constraints as part of an On-Premises deployment configuration. In this configuration, the Manager stores all the information regarding the detection of infected hosts and the analysis of software artifacts locally within your data center.
If you do not have such strict privacy requirements, we recommend you use the Manager component that is hosted in the NSX Cloud.
Supported Hardware
Refer to Hardware Specifications for details about the hardware certified for use with VMware NSX Network Detection and Response appliances.
Network Connectivity
The installation and update services need to connect to external servers for downloading software and data bundles (such as sandbox images). All hosts that are contacted for such downloads are listed in this section.
To increase the availability and reduce download times, the system can be configured to download large files from content distribution network (CDN) servers. As such hosts are geographically distributed, the contacted hosts may vary from system to system, and hosts outside the documented list may be contacted for downloads.
The use of CDNs is enabled by default. You can also explicitly enable or disable this feature with the lastline_register command (see Register the Manager).
If you explicitly enable the use of CDNs or choose to accept the default, ensure that you adjust your firewall rules to allow access to the CDN servers.
Domain Names
The server hosting the Manager needs to be able to connect to:
- user.lastline.com (for EMEA customers user.emea.lastline.com) on TCP port 443.
- log.lastline.com (for EMEA customers log.emea.lastline.com) on TCP port 443.
- update.lastline.com (for EMEA customers update.emea.lastline.com) on TCP port 443.
- ntp.lastline.com on UDP port 123 for time synchronization. It can be replaced with a local NTP server.
- Every Data Node on TCP port 9200, to support an Elasticsearch cluster.
- anonvpn.lastline.com on UDP port 1194. This is not mandatory, but highly recommended, as the lack of this connection can negatively impact the performance of the analysis engine. For further details, see Configure Analysis Traffic Routing.
You can add FQDNs such as the CDN domain for Google. For further details and information about VMware NSX Network Detection and Response CDN operation, see VMware Knowledge Base article NSX Lastline CDN Usage (900006).
Expected IP Addresses
The domain names above may resolve to any IP addresses within the following ranges:
- 38.95.226.0/24
- 38.142.33.16/28
- 199.91.71.80/28
- 46.244.5.64/28
- 66.170.109.0/24
All connections can be optionally routed through an HTTP/HTTPS proxy (see "Registration and Configuration"). Proxy authentication is not supported.
Acquire the Manager ISO
To install the Manager, you must download the ISO from VMware.
DNS Setup
As part of the license registration, the system must be associated with a fully qualified domain name and corresponding certificate.
Assuming that the FQDN lastline.example.com was set for the Manager, you must ensure that the following names all correspond to the same IP address, to allow the Sensor nodes to download updates and upload alerts as well as to allow access to the User Portal running on the system:
- user.lastline.example.com
- update.lastline.example.com
- log.lastline.example.com
Determine the IP address of the server by running the ifconfig command on the console.
The installation domain name must always specify a top-level (root) domain, such as .com, .edu, or .gov.
For an Active-Standby installation, to allow access to the standby Manager, you must ensure that the IP address of the server hosting that Manager is correctly mapped to the domain name user.standby.lastline.example.com in your DNS resolver. We recommend that you use a virtual IP address to allow for seamless fail-over.
SSL/TLS Certificate
All services on the Manager are accessible through HTTPS only. The Manager generates and uses a self-signed SSL certificate. This requires all managed appliances to store and trust this certificate during the registration phase.
If required, you can replace the SSL/TLS certificate on the Manager.
Install Manager
The installation process for the Manager consists of three steps. In the first step, the base system is installed. In the second step, basic configuration information is collected and the configuration is applied to the system. In the final step, required data is retrieved from the VMware backend servers.
Enable SSH access
Installation of the Manager will take 3 to 4 hours for most environments and may take longer in environments with restrictive proxy settings. Because of this initial install duration, it may be more convenient to enable SSH access to the appliance before you launch the lastline_register command.
Base System Installation
The Manager uses Ubuntu Server 18.04 (Bionic distribution) as its underlying operating system. Therefore, many of the steps of the installation are similar to the ones required to install Ubuntu Server. Refer to the Ubuntu guide, Installing Ubuntu 18.04.
Many of the steps involved in a standard Ubuntu installation have been automated and hidden from the Manager Installer.
If you are running an existing installation with appliances based on an earlier Ubuntu release, you should upgrade to a version based on Bionic. To upgrade to Bionic from Xenial, you must first update the Manager to the last version that supports Xenial (see the release notes for your specific version), and then follow the instructions in the linked support article.
Before starting the installation of the Manager software, the RAID controller must be configured for RAID 10 with write-caching enabled. You must ensure your RAID controller is configured appropriately.
Install on VMware ESXi
Before you install the Manager on VMware ESXi, you must ensure the VM meets the minimum hardware specifications for the class of appliance. See Hardware Specifications for details. Ensure that the base hardware runs on an Intel CPU.
Using the VMware ESXi vSphere client 7.0 update 3, create a new virtual machine and configure it to meet the requirements of the Manager.
Registration and Configuration
To register and apply the software configuration to the Manager, you must log in to the server console.
Register the Manager
The registration process runs some tests to check hardware compatibility. The configuration is then applied to the machine. This process may take a while (20-40 minutes) depending on your network connectivity and system characteristics.
After the completed prompt is displayed, select <Ok> or press Enter to exit from the registration process.
Acquire Sandbox Images
Manager must download the images used by the malware analysis sandbox component from the VMware backend servers. The image files consist of approximately 30 GB of compressed data. This step might take several hours, depending on the available network bandwidth.
Sideload Sandbox Images
If you had previously downloaded the sandbox images, you may want to sideload them onto the new Manager rather than downloading them again from the VMware backend.
Deploy a New Certificate
You can optionally replace the SSL/TLS certificate on the Manager. Assuming the Manager has an FQDN of lastline.example.com, the certificate needs to be valid for:
- user.lastline.example.com
- log.lastline.example.com
- update.lastline.example.com
- user.standby.lastline.example.com
We recommend using user.lastline.example.com as the commonName for the certificate. You should then specify the domain names above as Subject Alternative Names (SAN). This way user.lastline.example.com will work even for clients that do not support SAN. The certificate needs to be in x509 format. Intermediate CA certificates need to be appended to the server certificate file.
To create a private certificate using the openssl command and then deploy it on the Manager, perform the following steps:
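A minimal sketch of the certificate-creation step, assuming OpenSSL 1.1.1 or later (as shipped with Ubuntu 18.04) and example file names; adapt the key size, validity period, and paths to your environment, then deploy the resulting key and certificate on the Manager as described in the remaining steps:

server# openssl req -x509 -newkey rsa:4096 -sha256 -days 730 -nodes \
    -keyout lastline.example.com.key -out lastline.example.com.crt \
    -subj "/CN=user.lastline.example.com" \
    -addext "subjectAltName=DNS:user.lastline.example.com,DNS:log.lastline.example.com,DNS:update.lastline.example.com,DNS:user.standby.lastline.example.com"

If you instead obtain a certificate signed by a CA, append any intermediate CA certificates to the server certificate file (for example, cat intermediate-ca.crt >> lastline.example.com.crt).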
Trust the New Certificate
To add an SSL/TLS certificate to the set of certificates trusted by Manager, perform the following steps.
The following steps must be completed on all appliances (including the Manager).
Reinstall the Manager
If the Manager needs to be replaced or reinstalled, you must contact VMware Support to have your license re-enabled. You should specifically request that VMware Support "re-initialize the license" for your installation.
Administer the Manager
The Manager was developed to require as little maintenance and administration as possible.
The following topics describe how to customize and configure some of the advanced features of the Manager.
Configuration Tool
Use the VMware NSX Network Detection and Response configuration tool, lastline_setup, to administer and manage the Manager.
If you encounter an error running any of the lastline_setup command options, make a note of the error message returned and contact VMware Support.
Network Configuration
You can easily change the network configuration of the Manager. This may be needed if its assigned IP address changes (for example, upon a reconfiguration of the network).
Reconfigure for DHCP
To enable a network configuration using DHCP, use the network option of the lastline_setup command.
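For example, a session along the following lines (a sketch: the lastline_setup > prompt, the exact assignment form of the network option, and the save and exit steps are assumptions based on the syntax documented in the Appendix, and eth0 is a placeholder interface name):

manager# lastline_setup
lastline_setup > network = method dhcp
lastline_setup > network = interface eth0
lastline_setup > save
lastline_setup > exit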
Reconfigure for Static Addressing
To enable a network configuration using a static IP, you must provide values for the address, netmask, gateway, and dns_nameservers parameters. Use the network options of the lastline_setup command to make these changes.
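A sketch of a static configuration, reusing the example values shown for the network option in the Appendix (as in the previous sketch, the prompt, assignment form, and save step are assumptions; substitute your own addresses):

manager# lastline_setup
lastline_setup > network = method static
lastline_setup > network = address 10.0.2.15
lastline_setup > network = netmask 255.255.255.0
lastline_setup > network = gateway 10.0.2.2
lastline_setup > network = dns_nameservers 8.8.8.8 8.8.4.4
lastline_setup > save
lastline_setup > exit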
Reconfiguration After Network Update
After a new network address has been assigned to Manager (for example, after changing the static network address), the new configuration must be applied to all software on the host.
SMTP Configuration
The Manager can be configured to send notifications or reset account passwords via email. To configure the way emails are sent, use the email options of the lastline_setup command.
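For example, to relay mail through your own SMTP server, a session might look like the following sketch (the option names are documented in the Appendix; the relay host, port, credentials, sender address, and the prompt and save step shown here are placeholders and assumptions):

manager# lastline_setup
lastline_setup > email_relay_host = smtp.example.com
lastline_setup > email_relay_port = 587
lastline_setup > email_relay_username = relay-user
lastline_setup > email_relay_password = relay-secret
lastline_setup > email_sender_address = manager@lastline.example.com
lastline_setup > save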
Configure Analysis Traffic Routing
The Manager is configured by default so that traffic generated inside the VMware NSX Network Detection and Response analysis sandbox is routed to the Internet via a secure tunnel. This tunnel enables anonymizing the public IP of client connections, hence the component name, AnonVPN (Anonymization VPN). In addition to anonymizing the public IP, AnonVPN periodically rotates the IP with which connections to the Internet are made to avoid getting marked as malicious and blocked by third-party software when connecting to malware command-and-control infrastructure. The tunnel also prevents malware running inside the sandbox from accessing services in the local network. By routing traffic to outside the local network, only services reachable via public IPs are accessible to programs running inside the sandbox.
If you do not want to make use of the AnonVPN feature, the lastline_setup configuration utility allows you to specify a custom method for routing network connections with its anonvpn_mode option. The following three values are supported:
- lastline — Analysis traffic is routed via a secure tunnel using the default configuration.
- honeypot — Analysis traffic is not routed to the Internet. Instead, any connections established inside the sandbox are redirected to a honeypot on the appliance.
- custom — Analysis traffic is routed via a dedicated interface that you have configured.
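For example, to switch the appliance to honeypot mode, a session might look like the following sketch (the prompt and the save step are assumptions; set the option back to lastline to restore the default behavior):

manager# lastline_setup
lastline_setup > anonvpn_mode = honeypot
lastline_setup > save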
Configure Default AnonVPN
Manager uses a VPN to route traffic originating in the analysis sandbox. The VPN only routes outgoing connections and response packets. Thus, the VPN blocks any in-bound connections.
The AnonVPN configuration routes analysis traffic from Engine appliance to the Internet via Manager. Thus, AnonVPN only needs to be configured centrally on Manager.
The lastline option is the default and only needs to be configured if you had previously chosen one of the other options.
Configure Honeypot AnonVPN
The system supports the analysis of artifacts in a completely isolated network, without any outgoing connectivity. Because programs often require access to certain services on the Internet to function, the system emulates a set of services that use well-known protocols, such as (but not limited to) DNS, FTP, HTTP, HTTPS, and SMTP.
Any outgoing traffic using an unknown protocol is blocked to avoid accessing services in the local network.
In honeypot mode, the analysis of URLs in the sandbox will fail. Since no traffic to the Internet is allowed, when the analysis engine attempts to access a URL that was submitted for analysis, it is unable to open the connection to the URL, and reports an error. As a consequence, the URL analysis fails and no report is generated.
When running a honeypot without connectivity to the VMware backend, you should disable the cloud analysis component to avoid waiting for analysis metadata. See Configure Cloud Analysis for further details.
Configure Custom AnonVPN
To customize routing of analysis traffic, you must configure a dedicated network interface on the Manager using the /etc/network/interfaces configuration file. This configuration file is documented in the Ubuntu man-pages.
This interface can be a physical interface (such as eth3) or a virtual interface (such as an OpenVPN tunnel interface tun0). This interface has the following requirements:
- The configuration must happen via /etc/network/interfaces.
- The interface must use IPv4.
- The interface either uses a static IP or must be configured to invoke the /etc/anonvpn/routing_interface_up.sh command when the interface is assigned an IP address. This command is needed to trigger setup of packet routing. For OpenVPN connections this command can be invoked using the --up parameter.
- The interface must not be called llanonvpn0 or llanonvpn1, as these interface names are reserved for connecting Engine appliances to the local system or for interfaces in AnonVPN lastline mode.
In addition to the interface configuration, you must provide the following information to enable custom routing (an example interface definition follows this list):
- DNS server IP address — The IPv4 address of the DNS server which will be used for resolving domains inside the analysis sandbox. The DNS server must be reachable over the provided interface. DNS requests from the analysis engine will be routed over the same link as other analysis traffic.
- Gateway IP address — The IPv4 address of the gateway for routing packets on the custom interface. The gateway address must not be configured via /etc/network/interfaces to avoid routing non-analysis traffic via this interface. Note: The gateway is optional for point-to-point connections, such as connections established through OpenVPN.
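For a physical interface with a static address, a minimal /etc/network/interfaces stanza might look like the following sketch (the interface name and addresses are placeholders; no gateway line is present, because the gateway is supplied to lastline_setup instead so that only analysis traffic is routed via this interface):

auto eth3
iface eth3 inet static
    # Static address on the interface dedicated to analysis traffic
    address 198.51.100.10
    netmask 255.255.255.0
    # No "gateway" line here: pass the gateway to lastline_setup instead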
The Manager uses the custom VPN connection to route traffic originating in the analysis sandbox. The VPN only routes outgoing connections and response packets. Thus, the VPN blocks any in-bound connections.
To switch AnonVPN to using the custom network interface, ensure that the interface is up (use ifup <interface-name>, for example, ifup tun0) and then use the anonvpn options of the lastline_setup command.
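Continuing the sketch above, the analysis routing could then be switched to the dedicated interface as follows (the option names are documented in the Appendix; the interface name and addresses are placeholders, the prompt and save step are assumptions, and the gateway option can be omitted for point-to-point tunnels):

manager# ifup eth3
manager# lastline_setup
lastline_setup > anonvpn_mode = custom
lastline_setup > anonvpn_upstream_ifname = eth3
lastline_setup > anonvpn_dns_server_ip = 198.51.100.53
lastline_setup > anonvpn_upstream_gateway_ip = 198.51.100.1
lastline_setup > save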
It is possible to route analysis traffic via the primary network interface on Manager. This configuration is highly discouraged as it gives a sample under analysis full access to the local network. It is your responsibility to block any potentially malicious connections routed this way. The routing of analysis traffic via a custom network interface does not use a proxy even if one is configured.
Update Fully Qualified Domain Name
You can update the FQDN of the Manager. This also creates a new self-signed certificate associated with the FQDN.
For a high-availability configuration, you must copy the certificate and private keys to the Standby Manager (see Update Active Manager FQDN for detailed instructions), and then set the corresponding FQDN on the Standby Manager.
After you complete the following steps, you must update all the VMware NSX Network Detection and Response appliances managed by Manager to use the new FQDN. Refer to Update On-Premises Manager FQDN in the respective appliance installation guides.
Configure the Analysis Upload-Size Limit
By default, the VMware NSX Network Detection and Response rejects uploads of files for analysis that are larger than 10 MB. This value provides a reasonable compromise between the ability to analyze the vast majority of malicious artifacts and having to store overly large files. If required, you can modify this limit up to 200 MB.
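For example, to raise the limit to 50 MB, a session might look like the following sketch (the prompt and the save step are assumptions; the analysis_max_upload_filesize_mb option is documented in the Appendix):

manager# lastline_setup
lastline_setup > analysis_max_upload_filesize_mb = 50
lastline_setup > save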
Configure Data Retention
The VMware NSX Network Detection and Response tracks all of the stored files on the appliance and issues a notification through the User Portal interface when usage of the local file-system disk exceeds certain thresholds.
Periodically, large analysis artifacts (such as the metadata that an analysis generates) are deleted according to data-retention policies that can be updated using the lastline_setup command. The following is a full list of data-retention options:
- data_retention_uploads — Files uploaded for analysis.
- data_retention_screenshots — Screenshots taken during the dynamic analysis of a file submitted for analysis.
- data_retention_traffic_captures — Network traffic captured during the dynamic analysis of a file submitted for analysis.
- data_retention_generated_files — Files generated by a program during the dynamic analysis of a file submitted for analysis.
- data_retention_memory_dumps — Memory buffers allocated by a program during the dynamic analysis of a file submitted for analysis.
- data_retention_process_dumps — Full-process snapshots of a program during the dynamic analysis of a file submitted for analysis.
- data_retention_webpages — Web-page content captured during the analysis of a URL submitted for analysis.
- data_retention_code — Web code captured during the analysis of a URL submitted for analysis.
To prevent specific file types from being affected by the data-retention policies, you can use the value unlimited (or 0).
The following steps show how to define your configuration to discard files generated during an analysis run after 90 days, but to keep files uploaded for analysis indefinitely:
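A session like the following sketch implements that policy (the prompt and the save step are assumptions; both options are documented in the Appendix):

manager# lastline_setup
lastline_setup > data_retention_generated_files = 90
lastline_setup > data_retention_uploads = unlimited
lastline_setup > save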
Configure Cloud Analysis
The VMware NSX Network Detection and Response cloud analysis component extends analysis results generated in the local On-Premises installation by querying and sharing data with the VMware backend.
This component allows an individual installation to contribute to and benefit from the global intelligence collected by VMware, Inc. As a consequence, the analysis results generated when cloud analysis is enabled may be more accurate and may contain additional pieces of information (such as file origin information, threat classification, and more up-to-date analysis results). At the same time, sharing data with VMware, Inc. may not be desirable or even allowed in certain situations. Therefore, the cloud analysis component offers a number of configuration options to let you decide exactly what information gets shared.
- cloud_analysis — When this option is enabled, your installation shares the hashes (MD5, SHA1, and SHA256) of the analyzed artifacts with the VMware backend. For file artifacts, the actual content is not uploaded to the VMware backend.
- cloud_analysis_push_download_source — When this option is enabled, your installation shares the IP address and hostname of the server where the artifact was downloaded from with the VMware backend.
- cloud_analysis_push_download_metadata — When this option is enabled, your installation shares the URL where the artifact was downloaded from (HTTP, FTP, and SMB downloads) with the VMware backend. In the case of HTTP downloads, the referrer information is also shared, if available.
- cloud_analysis_query_url_reputation — When this option is enabled, your installation queries the VMware backend for metadata that can be included in the URL classification. Note that the full URL is shared with the VMware backend.
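For example, to share only artifact hashes and URL reputation queries while keeping download origin and metadata local, a session might look like the following sketch (the prompt and the save step are assumptions; the options are documented in the Appendix):

manager# lastline_setup
lastline_setup > cloud_analysis = on
lastline_setup > cloud_analysis_push_download_source = off
lastline_setup > cloud_analysis_push_download_metadata = off
lastline_setup > cloud_analysis_query_url_reputation = on
lastline_setup > save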
When the analysis system detects a malicious file or URL, it is possible to notify the VMware backend about the detection by uploading the artifact content. Sharing this information helps us and the security community by increasing the global intelligence, while limiting your sharing to malicious files minimizes the risk of exposing sensitive files.
To configure the sharing of malicious files, review the Data sharing tab of the Appliances → Configuration pages provided by the User Portal running on your Manager.
Configure the Analysis Queue
In certain situations, it can be convenient to automatically drop tasks scheduled for analysis from the queue. This way, even systems with limited resources can guarantee that submitted artifacts are analyzed in a timely manner, even when temporarily overloaded with a large number of submissions.
The VMware NSX Network Detection and Response provides a configuration option that automatically deletes tasks from the analysis queue that have been pending for more than the specified number of days.
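For example, to drop tasks that have been queued for more than seven days, a session might look like the following sketch (the prompt and the save step are assumptions; the analysis_queue_backlog option is documented in the Appendix):

manager# lastline_setup
lastline_setup > analysis_queue_backlog = 7
lastline_setup > save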
Configure Remote Assistance
By default, VMware NSX Network Detection and Response provides a mechanism to allow the VMware Support team to perform remote administration assistance on your Manager, when requested. You can disable this access with the lastline_setup command.
Should you need to contact VMware Support, the VMware, Inc. technician will probably request that you temporarily re-enable the support channel.
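A sketch of disabling the support channel (the prompt and the save step are assumptions; the disable_support_channel option is documented in the Appendix, and setting it back to false re-enables the channel):

manager# lastline_setup
lastline_setup > disable_support_channel = true
lastline_setup > save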
Enable the monitoring user
The Manager has a monitoring user who can access the system using the console or via SSH (password only, without using an SSH key). To enable the monitoring user, use the monitoring_user_password option of the lastline_setup command.
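For example, a session like the following sketch enables the account without echoing the password (the prompt and the save step are assumptions; the edit option, described in the Appendix, prompts for the new value and hides the input for password variables, and monitoring_user_password = - disables password-based authentication again):

manager# lastline_setup
lastline_setup > edit monitoring_user_password
lastline_setup > save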
Once the monitoring user is enabled, you can SSH to the Manager using that account:
server# ssh monitoring@ip_appliance
monitoring@ip_appliance's password:
...
monitoring@lastline-manager:~$
Enable Password-Based SSH Authentication
The Manager supports specifying users who can access the system using the console or via SSH (password only, without using an SSH key). To enable existing users to authenticate with password-based SSH, use the enable_additional_password_auth_ssh_usernames option of the lastline_setup command.
Once the user has been added, you can SSH to the Manager using that account:
server# ssh ghopper@ip_appliance
ghopper@ip_appliance's password:
...
ghopper@lastline-manager:~$
Manage Engine Appliances
In certain deployment scenarios it can be useful to disable a subset of Engine appliances from processing analysis tasks. For this purpose, the system provides a utility for marking individual Engine appliances as inactive, meaning that they will not be assigned any work.
Use the lastline_configure_engine_availability command to obtain a list of Engine appliances, to mark specific appliances as inactive, and to re-enable appliances that have been previously disabled.
Configure VMware ESXi HA for Virtualized Manager
You can configure high availability (HA) using either vSphere HA settings or the Active-Standby settings in the VMware NSX Network Detection and Response. Both settings cannot be configured simultaneously.
Install the Manager on VMware ESXi. You must ensure the VM meets the minimum hardware specifications for the class of appliance. See Hardware Specifications for details.
Create a virtual machine and configure vSphere HA settings.
Enable Active-Standby
To support an active-standby configuration, the VMware NSX Network Detection and Response system can deploy two Manager appliances operating in parallel. One Manager, referred to as the active Manager, is the primary appliance handling user requests and communication with Sensor and Engine appliances. The secondary Manager, referred to as the standby Manager, synchronizes all data from the active Manager to allow it to seamlessly take over operation from the active Manager in case of critical software or hardware failures.
Active-Standby Prerequisites
Before you can configure Manager for an active-standby environment, you must ensure the following requirements are met:
- Define a fail-over virtual IP address. A virtual IP address allows the standby Manager to seamlessly take over from the active Manager.
- If you do not use a virtual IP address, you should modify your DNS setup to respond quickly to name record changes to the IP address of the Manager. This can be achieved, for example, by using short DNS TTL (time-to-live) values.
- Ensure you have console access to both the active and standby Manager appliances.
- The name of the default network interface on both the active and standby Manager appliances must be at most 10 characters. The kernel limits IP address labels to 15 characters, and a 5-character suffix will be added to identify the virtual IP address.
Configure a Fail-over Virtual IP Address
VMware strongly recommends that you configure a shared virtual IP address for the active and standby Manager. This IP address does not correspond to a physical address and can switch from one manager to the other. Initially associated with the active Manager, the virtual IP is automatically moved to the standby Manager on takeover so that requests to the virtual IP address can be seamlessly served first by the former active Manager, then by the new active Manager as soon as the takeover process is completed.
To function correctly, the virtual IP address has to be in a subnet range common to both the active and standby Manager, must be configured on both appliances, and must be the same on both appliances.
Important: To enable seamless fail-over, you must ensure that all the VMware NSX Network Detection and Response appliances that are managed by the active and standby Manager are reconfigured to use the virtual IP address you define.
If an Engine appliance is not registered to the Manager virtual IP configured in the lastline_setup option failover_virtual_ip, the Engine will report the error "Traffic Routing Check Upstream: Running check on interface llanonvpn0 reported error, repair failed: Failed to resolve interface address." To change the FQDN or IP, use step 4 listed in Update Active Manager FQDN.
The use of DHCP to assign network addresses to the Manager can interfere with the internal mechanism that is used to manage the shared virtual IP address. Therefore, we highly recommend you use a static address configuration when deploying an active-standby environment. You can reconfigure Manager to use static IP addresses.
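For example, a session like the following sketch, run on both the active and the standby Manager, sets the shared virtual IP (the address is a placeholder in the subnet common to both appliances, and the prompt and the save step are assumptions; the failover_virtual_ip option is documented in the Appendix):

manager# lastline_setup
lastline_setup > failover_virtual_ip = 10.0.2.100
lastline_setup > save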
The tools used for managing the shared virtual IP use multicast UDP packets for communication between the active and standby Manager appliances. The IP multicast datagrams are sent with a TTL of 1 to restrict communication to nodes in the same subnet. You can configure the multicast address and port used for this purpose using the failover_multicast_address and failover_multicast_port options of the lastline_setup command.
If not explicitly set, the default multicast socket 226.94.1.1:5405 will be used.
Check that this configuration is the same on both the active and standby Manager by using the show option of the lastline_setup command.
It is also very important to avoid having the same multicast address/port configuration on more than one active-standby pair in the same subnet, as it will lead to conflicts. This is because when using the same multicast configuration, active-standby pairs will receive multicast packets from other active-standby pairs in the same subnet and interfere with each other.
If you configure the multicast address and port differently for each pair, you can have multiple active-standby pairs in the same subnet.
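For example, a session like the following sketch assigns a pair-specific multicast address and port and then displays the resulting configuration (the values are placeholders, and the prompt and the save step are assumptions; run the same commands on both the active and standby Manager):

manager# lastline_setup
lastline_setup > failover_multicast_address = 226.94.1.2
lastline_setup > failover_multicast_port = 5406
lastline_setup > show
lastline_setup > save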
Priority and password
The User Portal uses the keepalived daemon to manage its active/standby capability. The Virtual Router Redundancy Protocol (VRRP) underpins failover. It consists of a finite state machine providing low-level, high-speed interactions.
In most cases, the active/standby processes are managed transparently. However, there are a couple of toggles that you need to be aware of:
Priority
The system automatically elects the primary active Manager. This process determines the ownership of the shared virtual IP (VIP) address in the active/standby configuration.
After 48 takeovers, the active Manager will have the maximum supported priority for the purposes of electing which host owns the VIP address. This can create a problem when setting up a new standby Manager. In this scenario, you should use the ha_active_priority option of the lastline_setup command to set a lower value (at least 2; the initial default is 4) before you replicate the active Manager.
When manually changing ha_active_priority for an existing active/standby pair, you should take care to ensure that the same value is set for both systems. In order to maintain continuous service, modifications should be performed in the following order:
- Increase ha_active_priority on the active Manager first, then on the standby Manager.
- Decrease ha_active_priority on the standby Manager first, then on the active Manager.
The syntax of the ha_active_priority option is as follows:
ha_active_priority [= priority | -]
With no argument, display the current value of ha_active_priority. If an argument is provided, set ha_active_priority to the specified value. If the argument is - (dash), clear (unset) ha_active_priority.
Password
A password for the VRRP instance is automatically generated on the active Manager and propagated to the standby Manager (assuming failover_virtual_ip was set before you replicated the active Manager). Typically you do not need to alter this password.
However, if you desire a different password, use the ha_password option of the lastline_setup command. You must run this on both the active and standby Manager. The syntax of the ha_password option is as follows:
ha_password [= password | -]
With no argument, display the current ha_password (displayed as ***). If an argument is provided, set ha_password to the specified value. If the argument is - (dash), clear ha_password (set to an empty value).
Reconfigure the Existing Manager
Reconfigure your existing Manager to prepare it to become the active Manager in an active-standby environment. The following steps need to be performed on the original Manager before setting up the standby Manager:
Install Standby Manager
To install and configure the standby Manager, perform the following steps:
At this point the registration process prepares the system to support an active-standby pair.
Replicate Active Manager
To synchronize the active-standby pair, you must replicate a backup from the active Manager onto the new standby Manager. Use the lastline_restore_point_load command to perform this operation.
When the restore process completes, all of the data from the active Manager is synchronized onto the standby Manager.
Update Active Manager FQDN
If the FQDN of the active Manager changes, you must propagate this configuration onto the standby Manager.
Trigger Fail-over
In case of failure of the active Manager, the standby Manager will take over and become the new active Manager. Trigger this process with the following steps:
Fail-over Using the UI
As an alternative, you can trigger the fail-over from your browser. Access the User Portal running on the standby Manager. Connect using its FQDN (for example, user.standby.lastline.example.com) or IP address. You must log in with an account having administrative privileges (for example, the default lastline user account). The browser displays a Standby Manager page. Trigger the fail-over by clicking the Trigger Takeover button.
Click the Confirm Takeover button to start the fail-over process. If successful, a confirmation message is displayed.
Fail-over Process
If a shared virtual IP address had been previously configured on both active and standby Manager appliances, the standby Manager will start serving requests on that address.
Otherwise, you must change the DNS setup such that the fully qualified domain name record of active Manager now points to the IP address of standby Manager.
Once the changes in the DNS system have been pushed out, Sensor and Engine nodes will contact Manager, which is now active.
After a fail-over has occurred and the standby Manager has become the new active Manager, the full backup performed in "Reconfigure the Existing Manager" can no longer be reused to set up the new standby Manager. Before you can set up the new standby Manager, a full backup of the active Manager must first be performed.
If you plan to add additional Engine appliances to the new active Manager, you must download and install the malware analysis sandbox images. See Acquire Sandbox Images.
Test the Manager
Check the state of the Manager with the lastline_test_appliance command.
Disable Automatic Updates
VMware periodically releases appliance updates or hotfixes. By default, automatic updates are enabled on newly installed appliances. As long as the appliance has automatic updates enabled, these updates and fixes will transparently be applied to the system.
If you prefer to manually update the Manager, follow these steps to disable automatic updates.
Manual Updates
If you have disabled automatic updates for your appliances you must apply updates and hotfixes manually.
Follow these steps to manually update an appliance.
About Hardening
During the development process, steps were taken to lock down the Manager by default to help reduce any attack surfaces. These include:
- Default Applications — All unnecessary applications included in the base Ubuntu server build have been removed from the system. What remains are the libraries and applications necessary for the normal functioning, routine maintenance, and troubleshooting of the Manager.
- Default Firewall — The Manager image comes with Uncomplicated Firewall (UFW) installed and configured to restrict inbound access to the system.
- Security Patches — The system will install daily OS security updates by default. You can disable automatic updates.
- Least privilege — VMware has taken care to ensure a paradigm of least privilege regarding the permissions of services and file system access.
- Secure SSH — SSH is configured to use certificate-based authentication by default.
- TLS encryption — Communications between the appliances are TLS encrypted.
Harden the Manager
We recommend the following guidelines for hardening the Manager after installation. These steps are not required, but they will allow you to further restrict access to your VMware NSX Network Detection and Response appliances.
Hardware Specifications
The hardware certified for use with VMware NSX Network Detection and Response appliances is listed below:
Dell Hardware
Supported Dell Hardware
| Manager | |
|---|---|
| Server Model | Dell PowerEdge R450 |
| CPU Type | |
| CPU Quantity | 1 CPU |
| Minimum RAM | 96 GB |
| RAID Controller | Dell EMC PowerEdge RAID Controller (PERC) H745/H755 (with flash-backed cache) |
| RAID Configuration | RAID 10. Note: If the Dell website does not allow RAID 10 configuration from the factory, purchase the server with RAID unconfigured and then manually create a RAID 10 virtual volume before software installation. |
| Persistent Storage | Recommended: 4 × 4 TB HDDs |
| Additional Network Card | None |
| Redundant Power Supply | Recommended for reliability |
| iDRAC9 Enterprise | Recommended for remote management and installation |

| Data Node | |
|---|---|
| Server Model | Dell PowerEdge R450 |
| CPU Type | |
| CPU Quantity | 1 CPU |
| Minimum RAM | 96 GB |
| RAID Controller | Dell EMC PowerEdge RAID Controller (PERC) H745/H755 (with flash-backed cache) |
| RAID Configuration | RAID 10. Note: If the Dell website does not allow RAID 10 configuration from the factory, purchase the server with RAID unconfigured and then manually create a RAID 10 virtual volume before software installation. |
| Persistent Storage | Recommended: 4 × 2 TB 10k RPM HDDs |
| Additional Network Card | None |
| Redundant Power Supply | Recommended for reliability |
| iDRAC9 Enterprise | Recommended for remote management and installation |

| Engine | |
|---|---|
| Server Model | Dell PowerEdge R450 |
| CPU Type | |
| CPU Quantity | 1 CPU |
| Minimum RAM | 128 GB. Recommended: 4 GB per CPU virtual core |
| RAID Controller | Dell EMC PowerEdge RAID Controller (PERC) H745/H755 (with flash-backed cache) |
| RAID Configuration | RAID 1 |
| Persistent Storage | Minimum: 2 × 1 TB HDDs |
| Additional Network Card | None |
| Redundant Power Supply | Recommended for reliability |
| iDRAC9 Enterprise | Recommended for remote management and installation |

| Sensor — 1G Networks | |
|---|---|
| Server Model | Dell PowerEdge R450 |
| CPU Type | |
| CPU Quantity | 1 CPU |
| Minimum RAM | 64 GB |
| RAID Controller | Dell EMC PowerEdge RAID Controller (PERC) H745/H755 (with flash-backed cache) |
| RAID Configuration | RAID 1 |
| Persistent Storage | Minimum: 2 × 1 TB HDDs |
| Additional Network Card | Intel i350 Quad Port 1GbE |
| Redundant Power Supply | Recommended for reliability |
| iDRAC9 Enterprise | Recommended for remote management and installation |

| Sensor — 10G Networks | |
|---|---|
| Server Model | Dell PowerEdge R450 |
| CPU Type | |
| CPU Quantity | 2 CPUs |
| Minimum RAM | 128 GB |
| RAID Controller | Dell EMC PowerEdge RAID Controller (PERC) H745/H755 (with flash-backed cache) |
| RAID Configuration | RAID 1 |
| Persistent Storage | Minimum: 2 × 1 TB HDDs |
| Additional Network Card | Intel X710 Dual Port 10GbE |
| Redundant Power Supply | Recommended for reliability |
| iDRAC9 Enterprise | Recommended for remote management and installation |
Previously Supported Dell Hardware
The following Dell hardware is no longer supported.
| Manager | |
|---|---|
| Server Model | Dell PowerEdge R440 |
| Chassis Type | Chassis with Hot-plug Hard Drives |
| CPU Type | Intel® Xeon® Silver 4114 — or better (minimum 12 threads/cores) |
| CPU Quantity | 1 CPU |
| Minimum RAM | 64 GB ECC RAM |
| RAID Controller | HW RAID10 |
| RAID Configuration | |
| Minimum Persistent Storage | 4 × 2 TB 7.2K RPM SATA 6Gbps 3.5in |
| Power Supply | Dual Hot-plug Power — Optional |
| iDRAC9 Enterprise | Optional |
| ProSupport Service Plan | Optional |

| Data Node | |
|---|---|
| Server Model | Dell PowerEdge R440 |
| Chassis Type | Chassis with Hot-plug Hard Drives |
| CPU Type | Intel® Xeon® Silver 4114 — or better (minimum 24 threads/cores) |
| CPU Quantity | 1 CPU |
| Minimum RAM | 64 GB ECC RAM |
| RAID Controller | HW RAID10 |
| RAID Configuration | |
| Minimum Persistent Storage | 2 × 1 TB SATA HDD |
| Power Supply | Dual Hot-plug Power — Optional |
| iDRAC9 Enterprise | Optional |
| ProSupport Service Plan | Optional |

| Engine | |
|---|---|
| Server Model | Dell PowerEdge R440 |
| Chassis Type | Chassis with Hot-plug Hard Drives |
| CPU Type | Intel® Xeon® Silver 4114 — or better (minimum 20 threads/cores) |
| CPU Quantity | 1 CPU |
| Minimum RAM | 96 GB ECC RAM |
| RAID Controller | HW RAID10 |
| RAID Configuration | |
| Minimum Persistent Storage | 2 × 1 TB SATA HDD |
| Power Supply | Dual Hot-plug Power — Optional |
| iDRAC9 Enterprise | Optional |
| ProSupport Service Plan | Optional |

| Sensor — 1G Networks | |
|---|---|
| Server Model | Dell PowerEdge R440 |
| Chassis Type | Chassis with Hot-plug Hard Drives |
| CPU Type | Intel® Xeon® Silver 4114 — or better (minimum 20 threads/cores) |
| CPU Quantity | 1 CPU |
| Minimum RAM | 32 GB ECC RAM |
| RAID Controller | HW RAID10 |
| RAID Configuration | |
| Minimum Persistent Storage | 2 × 1 TB SATA (7.2K RPM) HDD |
| Power Supply | Dual Hot-plug Power — Optional |
| Network Card | Intel Ethernet I350 Quad-Port 1Gb Server Adapter |
| iDRAC9 Enterprise | Optional |
| ProSupport Service Plan | Optional |

| Sensor — 10G Networks | |
|---|---|
| Server Model | Dell PowerEdge R440 |
| Chassis Type | Chassis with Hot-plug Hard Drives |
| CPU Type | Intel® Xeon® Silver 4114 — or better (minimum 20 threads/cores) |
| CPU Quantity | 2 CPUs |
| Minimum RAM | 128 GB ECC RAM |
| RAID Controller | HW RAID10 |
| RAID Configuration | |
| Minimum Persistent Storage | 2 × 1 TB SATA (7.2K RPM) HDD |
| Power Supply | Dual Hot-plug Power — Optional |
| Network Card | Intel Ethernet X710-DA2 10Gbps network card |
| iDRAC9 Enterprise | Optional |
| ProSupport Service Plan | Optional |

| All-In-One | |
|---|---|
| Server Model | Dell PowerEdge R440 |
| Chassis Type | Chassis with Hot-plug Hard Drives |
| CPU Type | Intel® Xeon® Silver 4114 — or better (minimum 20 threads/cores) |
| CPU Quantity | 2 CPUs |
| Minimum RAM | 128 GB ECC RAM |
| RAID Controller | HW RAID10 |
| RAID Configuration | |
| Minimum Persistent Storage | 4 × 2 TB 7.2K RPM SATA 6Gbps 3.5in |
| Power Supply | Dual Hot-plug Power — Optional |
| Network Card | Intel Ethernet X710-DA2 10Gbps network card |
| iDRAC9 Enterprise | Optional |
| ProSupport Service Plan | Optional |

| Analyst | |
|---|---|
| Server Model | Dell PowerEdge R440 |
| Chassis Type | Chassis with Hot-plug Hard Drives |
| CPU Type | Intel® Xeon® Silver 4114 — or better (minimum 12 threads/cores) |
| CPU Quantity | 1 CPU |
| Minimum RAM | 96 GB ECC RAM |
| RAID Controller | HW RAID10 |
| RAID Configuration | |
| Minimum Persistent Storage | 4 × 2 TB 7.2K RPM SATA 6Gbps 3.5in |
| Power Supply | Dual Hot-plug Power — Optional |
| iDRAC9 Enterprise | Optional |
| ProSupport Service Plan | Optional |
HPE Hardware
- Manager: Intel® Xeon® Silver 4114 2.2 GHz, 64 GB RAM, 4 × 2 TB in RAID 10 (6 Gbps SATA), On-board NIC
- Data Node: Intel® Xeon® Silver 4114 2.2 GHz, 64 GB RAM, 4 × 2 TB in RAID 10 (SAS 10K RPM), On-board NIC
- Engine: Intel® Xeon® Silver 4114 2.2 GHz, 96 GB RAM, 2 × 2 TB HDDs in RAID 1 (6 Gbps SATA), On-board NIC
- Sensor — 1G Networks: Intel® Xeon® Silver 4114 2.2 GHz, 32 GB RAM, 2 × 2 TB HDDs in RAID 1 (6 Gbps SATA), Intel I350 Quad port (or HPE 366T)
- Sensor — 10G Networks: 2 × Intel® Xeon® Silver 4114 2.2 GHz, 128 GB RAM, 2 × 2 TB HDDs in RAID 1 (6 Gbps SATA), Intel X710-DA2
- Analyst: 2 × Intel® Xeon® Silver 4114 2.2 GHz, 128 GB RAM, 2 × 2 TB HDDs in RAID 1 (6 Gbps SATA), On-board NIC
Appendix
Setup command options
The lastline_setup command provides a number of configuration options that are used to administer and manage the VMware NSX Network Detection and Response appliances.
Command line arguments
The lastline_setup command supports the following command line arguments:
- Help
  -h, --help
  Print the help message and exit.
- Acquire lock
  --lock-timeout TIME
  The lastline_setup command has a configuration lock to prevent more than one user from accessing its database at the same time. Set the amount of TIME (in seconds) to allow for acquiring the lock. The default is 0 (zero) seconds.
Configuration options
The available options vary depending on the type of appliance. The Manager has an extensive set whereas the Sensor has fewer options. To view all the supported options for the current appliance, use the help option.
To view a detailed description of individual options, type help topic, where topic is the name of a specific option.
The lastline_setup command supports the following configuration options:
- Maximum file upload size
  analysis_max_upload_filesize_mb [= size]
  Display or set the maximum file size (in MB) the system will accept for analysis. With no argument, display the current maximum file size allowance. If an argument is provided, set the maximum file size allowance to the specified value. The argument size must be numeric.
- Length of analysis queue
  analysis_queue_backlog [= days | unlimited]
  Display or set the number of days to keep unprocessed tasks in the analysis queue. With no argument, display the current number of days. The default is unlimited. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- AnonVPN DNS server
  anonvpn_dns_server_ip [= IPaddr | -]
  You can configure a DNS server specifically for AnonVPN to assist with anonymizing client connections. Display or set the IP address of the DNS server for AnonVPN. With no argument, display the current IP address of the AnonVPN DNS server. If an argument is provided and is an IP address, set the DNS server to the specified value. You must provide a valid IPv4 address for the DNS service. This address must be reachable via the AnonVPN interface. If the argument is - (dash), clear (unset) the DNS server address.
- AnonVPN mode
  anonvpn_mode [= lastline | honeypot | custom | -]
  Display or set the AnonVPN mode. With no argument, display the current setting. If an argument is provided and is one of lastline, honeypot, or custom, set the mode to the specified value. If the value is - (dash), clear the mode (set to an empty value). This argument should not be used.
- AnonVPN gateway
  anonvpn_upstream_gateway_ip [= IPaddr | -]
  Display or set the AnonVPN upstream gateway address. With no argument, display the current IP address of the gateway. If an argument is provided and is an IP address, set the gateway to the specified value. Any valid IPv4 address can be used for the gateway. This address must be in the same subnet as the IP address assigned to the AnonVPN interface. If the provided argument is - (dash), clear (unset) the gateway address. This setting is not required for point-to-point tunnel connections (for example, OpenVPN).
- AnonVPN interface
  anonvpn_upstream_ifname [= interface | -]
  Display or set the AnonVPN upstream interface. With no argument, display the current interface name. If an argument is provided, set the interface name to the specified value. You can specify any valid interface name other than llanonvpn0 or llanonvpn1. If the argument is - (dash), clear the interface name (set to an empty value).
- Appliance state
  appliance_state
  Display the appliance state. For example, active, error, offline, etc.
- Appliance UUID
  appliance_uuid
  Display the appliance UUID. For example, 0123456789abcdef0123456789abcdef.
- Cloud analysis
  cloud_analysis [= on | off]
  Display or set cloud analysis support. With no argument, display the current status. If an argument is provided, set cloud analysis support to the specified value. Possible values are on or off. When enabled, hashes (MD5, SHA1, and SHA256) of the analyzed artifacts are shared with the NSX Cloud.
- Download metadata for cloud analysis
  cloud_analysis_push_download_metadata [= on | off]
  Display or set support to allow sending artifact metadata (download origin, filename, type, etc.) to the NSX Cloud. With no argument, display the current status. If an argument is provided, set the download support to the specified value. Possible values are on or off. When enabled, the URL the artifact was downloaded from (HTTP, FTP, and SMB downloads) is sent to the VMware backend.
- Download URL for cloud analysis
  cloud_analysis_push_download_source [= on | off]
  Display or set support to allow sending the artifact download origin to the NSX Cloud. With no argument, display the current status. If an argument is provided, set the download support to the specified value. Possible values are on or off. When enabled, the IP address and host name of the server the artifact was downloaded from are sent to the VMware backend.
- Query URL reputation from cloud analysis
  cloud_analysis_query_url_reputation [= on | off]
  Display or set support to allow requesting URL reputation data from the NSX Cloud. With no argument, display the current status. If an argument is provided, set the URL classification support to the specified value. Possible values are on or off. When enabled, the VMware backend is queried for reputation metadata that can be used to classify a URL. The full URL is shared with the VMware backend.
- Data retention for code
  data_retention_code [= days | unlimited]
  Display or set the number of days to retain Web code captured during an analysis run of a submitted URL. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Data retention for generated files
  data_retention_generated_files [= days | unlimited]
  Display or set the number of days to retain files generated by a program during a dynamic analysis run. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Data retention for memory dumps
  data_retention_memory_dumps [= days | unlimited]
  Display or set the number of days to retain memory buffers allocated by a program during a dynamic analysis run. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Data retention for process dumps
  data_retention_process_dumps [= days | unlimited]
  Display or set the number of days to retain full-process snapshots of a program during a dynamic analysis run. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Data retention for screenshots
  data_retention_screenshots [= days | unlimited]
  Display or set the number of days to retain screenshots taken during a dynamic analysis run. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Data retention for traffic captures
  data_retention_traffic_captures [= days | unlimited]
  Display or set the number of days to retain network traffic captured during a dynamic analysis run. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Data retention for uploads
  data_retention_uploads [= days | unlimited]
  Display or set the number of days to retain files uploaded for analysis. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Data retention for webpages
  data_retention_webpages [= days | unlimited]
  Display or set the number of days to retain Web page content captured during a dynamic analysis run. With no argument, display the current number of days. If an argument is provided, set the number of days to the specified value. The argument days can be numeric or unlimited (or 0).
- Comment on analysis reports
  disable_report_commenting [= true | false | -]
  Display or set the ability to comment on analysis reports. With no argument, display the current status. If an argument is provided, set the ability to comment to the specified value. Possible values are true or false. If the argument is - (dash), clear the field (this is the same as setting the value to false).
- Disable the support channel
  disable_support_channel [= true | false | -]
  Display or set the support channel. With no argument, display the current status. If an argument is provided, set the support channel to the specified value. Possible values are true or false. The default (false) allows VMware Support to perform remote administration assistance at your request. If the argument is - (dash), clear the field (this is the same as setting the value to false).
- Edit variables
  edit [variable]
  Edit the value stored for the entered variable. A prompt for entering a new value for the variable is displayed. If the variable being edited is a password variable, your input will not be displayed. To view a list of the variables available for editing, run the edit option with no argument.
- Email relay host
  email_relay_host [= IPaddr | hostname | -]
  Display or set the host name or IP address for the SMTP relay host. With no argument, display the current host. If an argument is provided, set the host to the specified value. If the argument is - (dash), clear (unset) the host. In this case, the VMware backend is used.
- Email relay password
  email_relay_password [= password | -]
  Display or set the authentication password for the SMTP relay host. With no argument, display the current password. If an argument is provided, set the password to the specified value. If the argument is - (dash), clear (unset) the password.
- Email relay port
  email_relay_port [= port | -]
  Display or set the port number for the SMTP relay host. With no argument, display the current port. If an argument is provided, set the port to the specified value. If the argument is - (dash), clear (unset) the port.
- Email relay username
  email_relay_username [= username | -]
  Display or set the username for the SMTP relay host. With no argument, display the current username. If an argument is provided, set the username to the specified value. If the argument is - (dash), clear (unset) the username.
- Email sender address
  email_sender_address [= address | -]
  Display or set the email address to be used for delivering email. With no argument, display the current email sender address. If an argument is provided, set the sender address to the specified value. If the argument is - (dash), clear (unset) the sender address.
- Failover multicast address
  failover_multicast_address [= address | -]
  Display or set the multicast address needed by the tools used for managing the shared virtual IP between active and standby Manager in an active/standby configuration. With no argument, display the current value of the failover multicast address. If an argument is provided, set the address to the specified value. If the argument is - (dash), clear (unset) the failover multicast address.
- Failover multicast port
  failover_multicast_port [= port | -]
  Display or set the multicast port needed by the tools used for managing the shared virtual IP between active and standby Manager in an active/standby configuration. With no argument, display the current value of the failover multicast port. If an argument is provided, set the port number to the specified value. If the argument is - (dash), clear (unset) the failover multicast port. There is no standard multicast port number. VMware NSX Network Detection and Response uses 5405 as its default.
- Failover virtual IP address
  failover_virtual_ip [= address | -]
  Display or set the virtual IP address shared between active and standby Manager in an active/standby configuration. With no argument, display the current value of the virtual IP address. If an argument is provided, set the virtual IP address to the specified value. If the argument is - (dash), clear (unset) the virtual IP address.
- Fully qualified domain name
  fqdn
  Display the fully qualified domain name of the appliance.
- Active manager priority
-
ha_active_priority [= priority | -]
Display or set the priority of the active Manager for the purposes of determining ownership of the shared virtual IP address in an active/standby configuration. Select a value higher than the highest priority recently used for this virtual IP address.
With no argument, display the current value of the active manager priority. If an argument is provided, set the priority to the specified value. If the argument is
-
(dash), clear (unset) the active manager priority. - Active manager password
-
ha_password [= password | -]
Display or set the password for managing the virtual IP address shared between active and standby Manager in an active/standby configuration. With no argument, display the current active/standby password (displayed as
***
). If an argument is provided, set the password to the specified value. If the argument is-
(dash), clear the active/standby password (set to empty value). - HTTPS proxy
-
https_proxy [= proxy_address:port | -]
Display or set the HTTPS proxy. With no argument, display the current proxy. If an argument is provided, set the proxy to the specified value. The HTTPS proxy must be in the format
proxy_address:port
(for example,proxy.example.com:8080
or192.168.0.1:443
). If the argument is-
(dash), clear (unset) the proxy. - Replace branding images
image_brand_replacement [= on | off]
This feature is provided for partners who wish to replace the VMware logo and other assets with their own.
Display or set the status of the brand image replacement policy. With no argument, display the current status. If an argument is provided, set the policy to the specified value. Possible values are on or off. When enabled, the Manager will display the replacement visual assets in its hosted User Portal. These files must be located in the /home/lastline/brand_replacement_files/ directory.
- Inject interface
inject_interface [= interface | -]
Display or set the interface used for injecting blocking packets according to the configured modes, for example, TCP RST packet, DNS NXDOMAIN response, HTTP 302 redirect, etc. With no argument, display the current interface name. If an argument is provided, set the interface name to the specified value. You can specify any valid interface name, for example eth1. If the argument is - (dash), clear the interface name (set to an empty value).
- Inline interfaces
inline_interfaces [= interface-interface, interface-interface, ... | -]
Display or set the list of interface pairs used for inline mode. With no argument, display the current interface pairs. If an argument is provided, set the interfaces to the specified value. Specify a comma-separated list of interface pairs, for example eth1-eth2, eth3-eth4. If the argument is - (dash), clear the interface pairs (set to an empty value).
- License API token
license_api_token
Display the On-Premises license API token.
- License key
license_key
Display the On-Premises license key.
- Update server override
llama_images_server_override [= IPaddr | hostname | -]
Display or set the host name or IP address for the server from which to download LLAMA images. With no argument, display the current host. If an argument is provided, set the server to the specified value. If the argument is - (dash), clear (unset) the server. This option is provided for installations that must substitute another server for the default update.lastline.com.
- Manager domain name
manager [= domain name | -]
Display or set the domain name of the Manager. With no argument, display the current value for manager. If an argument is provided, set manager to the specified value. If the argument is - (dash), clear (unset) the Manager domain name. In most instances, you should leave this field at its default value of lastline.com or, for an On-Premises installation, the FQDN of the local Manager. If you must change this entry, enter the domain name of the Manager you want to connect to. If you use lastline.example.com, for example, update.lastline.example.com and log.lastline.example.com should be additional aliases for the same IP address in your default DNS server.
- Monitoring user
monitoring_user_password [= password | -]
Enable or disable the monitoring user. With no argument, display the current state. If an argument is provided, set the monitoring user password to the specified value. If the argument is - (dash), disable password-based authentication.
- Network parameters
network [= variable value]
Display or set the network parameters of the appliance. There are two network methods: DHCP or static. With no argument, display the current network settings. For example:
DHCP settings
network interface = eth0
network method = dhcp
Static settings
network dns_nameservers = 8.8.8.8 8.8.4.4
network gateway = 10.0.2.2
network netmask = 255.255.255.0
network address = 10.0.2.15
network interface = eth0
network method = static
The network option has a number of variables:
  - network interface — Set the interface used for network access.
    network interface interface
  - network method — Set the network method. For dhcp, the appliance gets its address and other network information from a DHCP server. For static, you define all the network parameters.
    network method dhcp | static
  - network address — For a static configuration, set the IPv4 address of the interface.
    network address IPaddr
  - network netmask — For a static configuration, set the dotted-quad netmask of the interface.
    network netmask netmask
  - network gateway — For a static configuration, set the IP address of the default gateway for network access. If the argument is - (dash), set the gateway address to None.
    network gateway [IPaddr | -]
  - network dns_nameservers — For a static configuration, enter a list of space-separated IP addresses for the DNS servers. If the argument is - (dash), set the DNS servers to None.
    network dns_nameservers [IPaddr IPaddr ... | -]
- Monitoring user
new_monitoring_user_password [= password | -]
Enable or disable access to the appliance for the monitoring user. With no argument, display the current monitoring user password (displayed as ***). If an argument is provided, set the monitoring user password to the specified value. If the argument is - (dash), clear the monitoring user password (set to empty value).
- NTP servers
ntp_servers [= IPaddr,IPaddr,... | -]
Display or set the NTP servers list. With no argument, display the current value for the NTP servers list. If an argument is provided, set the NTP servers list to the specified value. The NTP server addresses must be comma separated. If the argument is - (dash), clear (unset) the NTP servers list.
- Offline mode
offline_mode
Display offline mode. This allows the appliance to work without an Internet connection.
- Save
save [skip_apply] [skip_network_restart]
Save your changes, apply the new configuration, and exit.
If skip_network_restart is specified, the network will not be restarted and therefore any changed network settings will be saved but not applied.
If skip_apply is specified, the new configuration will be saved but not applied. You can later run the lastline_apply_config command to make the new configuration effective (see the example at the end of this section).
- Sensor subkey
sensor_subkey
Display the Sensor subkey. To change this value, the Sensor must be deregistered, and then re-registered using the lastline_register command.
- Show configuration
show
Display the current configuration. For example, the configuration of a Sensor:
-> show
anonymization_password = ***
appliance_state = active
appliance_uuid = 046cf54cb3d46eab0c3263724cd56b6a
disable_support_channel =
https_proxy =
inject_interface = eth2
inline_interfaces =
license_key = 0Z6LLNOU4ZP12BWBTOJ0
manager = manager.lastline.example.com
monitoring_user_password: enabled
network interface = eth0
network method = dhcp
new_monitoring_user_password = ***
ntp_server = update.lastline.com
ntp_servers = update.lastline.com
sensor_subkey = sensor01
sniffing_interfaces = eth2
- Sniffing interface
sniffing_interface [= interface, interface, ... | -]
Display or set the list of interfaces the Sensor should monitor. With no argument, display the current interfaces. If an argument is provided, set the interfaces to the specified value. Specify a comma-separated list of interface names, for example eth1, eth2. If the argument is - (dash), clear the sniffing interfaces (set to an empty value).
- Replace branding text
text_brand_replacement [= JSON]
This feature is provided for partners who wish to replace the VMware logo and other assets with their own.
Display or set the brand text replacement using JSON. With no argument, display the current JSON. If an argument is provided, set the brand text to the specified value.
The JSON content should be entered as a single line. For example:
text_brand_replacement = {"company_short_name_ascii":"llPartner","company_short_name_utf8":"エロパタナ"}
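As an illustration of the email relay options described above, the following sketch shows how the SMTP relay settings might be entered at the lastline_setup prompt. The port number, username, and sender address are placeholder values chosen for this example, not recommended settings:
-> email_relay_port = 587
-> email_relay_username = manager-relay
-> email_sender_address = nsx-alerts@example.com
To clear any of these settings again, assign the - (dash) value instead, for example email_relay_username = -.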
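Similarly, a minimal sketch of the active/standby failover settings discussed above, as they might be entered on the active Manager. The multicast address, virtual IP address, priority, and password are illustrative placeholders only; choose values appropriate to your own network and follow the failover documentation for the standby Manager:
-> failover_multicast_address = 239.192.0.1
-> failover_multicast_port = 5405
-> failover_virtual_ip = 10.0.2.100
-> ha_active_priority = 150
-> ha_password = example-placeholder-password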
Exit options
To quit the lastline_setup command without saving your changes, type exit.
If you made changes that you want applied, you must use the save option to update the appliance database and configuration. The lastline_setup command then exits.