Ubuntu Server Guide
Introduction

The Linux kernel includes the Netfilter subsystem, which is used to manipulate or decide the fate of network traffic headed into or through your server. All modern Linux firewall solutions use this system for packet filtering.

The kernel's packet filtering system would be of little use to administrators without a userspace interface to manage it. This is the purpose of iptables: when a packet reaches your server, it is handed off to the Netfilter subsystem for acceptance, manipulation, or rejection based on the rules supplied to it from userspace via iptables. Thus, iptables is all you need to manage your firewall if you are familiar with it, but many frontends are available to simplify the task.

ufw - Uncomplicated Firewall

The default firewall configuration tool for Ubuntu is ufw. Developed to ease iptables firewall configuration, ufw provides a user-friendly way to create an IPv4 or IPv6 host-based firewall. ufw is initially disabled by default.

From the ufw man page: "ufw is not intended to provide complete firewall functionality via its command interface, but instead provides an easy way to add or remove simple rules. It is currently mainly used for host-based firewalls."

The following are some examples of how to use ufw:

• First, ufw needs to be enabled. From a terminal prompt enter:

    sudo ufw enable

• To open a port (SSH in this example):

    sudo ufw allow 22

• Rules can also be added using a numbered format:

    sudo ufw insert 1 allow 80

• Similarly, to close an opened port:

    sudo ufw deny 22

• To remove a rule, use delete followed by the rule:

    sudo ufw delete deny 22

• It is also possible to allow access from specific hosts or networks to a port. The following example allows SSH access from host 192.168.0.2 to any IP address on this host:

    sudo ufw allow proto tcp from 192.168.0.2 to any port 22

  Replace 192.168.0.2 with 192.168.0.0/24 to allow SSH access from the entire subnet.

• Adding the --dry-run option to a ufw command will output the resulting rules, but not apply them. For example, the following is what would be applied if opening the HTTP port:

    sudo ufw --dry-run allow http

    *filter
    :ufw-user-input - [0:0]
    :ufw-user-output - [0:0]
    :ufw-user-forward - [0:0]
    :ufw-user-limit - [0:0]
    :ufw-user-limit-accept - [0:0]
    ### RULES ###

    ### tuple ### allow tcp 80 0.0.0.0/0 any 0.0.0.0/0
    -A ufw-user-input -p tcp --dport 80 -j ACCEPT

    ### END RULES ###
    -A ufw-user-input -j RETURN
    -A ufw-user-output -j RETURN
    -A ufw-user-forward -j RETURN
    -A ufw-user-limit -m limit --limit 3/minute -j LOG --log-prefix "[UFW LIMIT]: "
    -A ufw-user-limit -j REJECT
    -A ufw-user-limit-accept -j ACCEPT
    COMMIT
    Rules updated

• ufw can be disabled by:

    sudo ufw disable

• To see the firewall status, enter:

    sudo ufw status

• And for more verbose status information use:

    sudo ufw status verbose

• To view the numbered format:

    sudo ufw status numbered

Note
If the port you want to open or close is defined in /etc/services, you can use the port name instead of the number. In the above examples, replace 22 with ssh.

This is a quick introduction to using ufw. Please refer to the ufw man page for more information.
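Putting several of these commands together, a minimal host firewall that only accepts SSH from a trusted subnet might be set up as in the following sketch. The subnet and the use of ufw's default policy commands are illustrative assumptions, not taken from the examples above:

    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow proto tcp from 192.168.0.0/24 to any port 22
    sudo ufw enable
    sudo ufw status verbose

Setting the default policies first means that any service not explicitly allowed is blocked once ufw is enabled.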
ufw Application Integration

Applications that open ports can include a ufw profile, which details the ports needed for the application to function properly. The profiles are kept in /etc/ufw/applications.d, and can be edited if the default ports have been changed.

• To view which applications have installed a profile, enter the following in a terminal:

    sudo ufw app list

• Similar to allowing traffic to a port, using an application profile is accomplished by entering:

    sudo ufw allow Samba

• An extended syntax is available as well:

    ufw allow from 192.168.0.0/24 to any app Samba

  Replace Samba and 192.168.0.0/24 with the application profile you are using and the IP range for your network.

  Note
  There is no need to specify the protocol for the application, because that information is detailed in the profile. Also, note that the app name replaces the port number.

• To view details about which ports, protocols, etc., are defined for an application, enter:

    sudo ufw app info Samba

Not all applications that require opening a network port come with ufw profiles, but if you have profiled an application and want the file to be included with the package, please file a bug against the package in Launchpad:

    ubuntu-bug nameofpackage
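For illustration, a profile is a small INI-style file; the sketch below shows roughly what one for a hypothetical service could look like. The file name, section name and ports are invented for this example:

    # /etc/ufw/applications.d/exampled
    [Exampled]
    title=Example daemon
    description=Hypothetical service used to illustrate the profile format
    ports=7777/tcp|7778:7779/udp

Multiple ports or ranges are separated with "|", and each entry carries its protocol, which is why the protocol does not need to be given on the ufw command line.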
IP Masquerading

The purpose of IP Masquerading is to allow machines with private, non-routable IP addresses on your network to access the Internet through the machine doing the masquerading. Traffic from your private network destined for the Internet must be manipulated for replies to be routable back to the machine that made the request. To do this, the kernel must modify the source IP address of each packet so that replies will be routed back to it, rather than to the private IP address that made the request, which is impossible over the Internet. Linux uses Connection Tracking (conntrack) to keep track of which connections belong to which machines and reroute each return packet accordingly. Traffic leaving your private network is thus "masqueraded" as having originated from your Ubuntu gateway machine. This process is referred to in Microsoft documentation as Internet Connection Sharing.

ufw Masquerading

IP Masquerading can be achieved using custom ufw rules. This is possible because the current back-end for ufw is iptables-restore, with the rules files located in /etc/ufw/*.rules. These files are a great place to add legacy iptables rules used without ufw, and rules that are more network gateway or bridge related. The rules are split into two different files: rules that should be executed before ufw command line rules, and rules that are executed after ufw command line rules.

• First, packet forwarding needs to be enabled in ufw. Two configuration files will need to be adjusted. In /etc/default/ufw change DEFAULT_FORWARD_POLICY to "ACCEPT":

    DEFAULT_FORWARD_POLICY="ACCEPT"

  Then edit /etc/ufw/sysctl.conf and uncomment:

    net/ipv4/ip_forward=1

  Similarly, for IPv6 forwarding uncomment:

    net/ipv6/conf/default/forwarding=1

• Now add rules to the /etc/ufw/before.rules file. The default rules only configure the filter table, and to enable masquerading the nat table will need to be configured. Add the following to the top of the file just after the header comments:

    # nat Table rules
    *nat
    :POSTROUTING ACCEPT [0:0]

    # Forward traffic from eth1 through eth0.
    -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

    # don't delete the 'COMMIT' line or these nat table rules won't be processed
    COMMIT

  The comments are not strictly necessary, but it is considered good practice to document your configuration. Also, when modifying any of the rules files in /etc/ufw, make sure these lines are the last lines for each table modified:

    # don't delete the 'COMMIT' line or these rules won't be processed
    COMMIT

  For each table a corresponding COMMIT statement is required. In these examples only the nat and filter tables are shown, but you can also add rules for the raw and mangle tables.

  Note
  In the above example replace eth0, eth1, and 192.168.0.0/24 with the appropriate interfaces and IP range for your network.

• Finally, disable and re-enable ufw to apply the changes:

    sudo ufw disable && sudo ufw enable

IP Masquerading should now be enabled. You can also add any additional FORWARD rules to /etc/ufw/before.rules. It is recommended that these additional rules be added to the ufw-before-forward chain.

iptables Masquerading

iptables can also be used to enable masquerading.

• Similar to ufw, the first step is to enable IPv4 packet forwarding by editing /etc/sysctl.conf and uncommenting the following line:

    net.ipv4.ip_forward=1

  If you wish to enable IPv6 forwarding, also uncomment:

    net.ipv6.conf.default.forwarding=1

• Next, execute the sysctl command to enable the new settings in the configuration file:

    sudo sysctl -p

• IP Masquerading can now be accomplished with a single iptables rule, which may differ slightly based on your network configuration:

    sudo iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o ppp0 -j MASQUERADE

  The above command assumes that your private address space is 192.168.0.0/16 and that your Internet-facing device is ppp0. The syntax is broken down as follows:

  – -t nat – the rule is to go into the nat table
  – -A POSTROUTING – the rule is to be appended (-A) to the POSTROUTING chain
  – -s 192.168.0.0/16 – the rule applies to traffic originating from the specified address space
  – -o ppp0 – the rule applies to traffic scheduled to be routed through the specified network device
  – -j MASQUERADE – traffic matching this rule is to "jump" (-j) to the MASQUERADE target to be manipulated as described above

• Also, each chain in the filter table (the default table, and where most or all packet filtering occurs) has a default policy of ACCEPT, but if you are creating a firewall in addition to a gateway device, you may have set the policies to DROP or REJECT, in which case your masqueraded traffic needs to be allowed through the FORWARD chain for the above rule to work:

    sudo iptables -A FORWARD -s 192.168.0.0/16 -o ppp0 -j ACCEPT
    sudo iptables -A FORWARD -d 192.168.0.0/16 -m state \
        --state ESTABLISHED,RELATED -i ppp0 -j ACCEPT

  The above commands will allow all connections from your local network to the Internet and all traffic related to those connections to return to the machine that initiated them.

• If you want masquerading to be enabled on reboot, which you probably do, edit /etc/rc.local and add any commands used above. For example, add the first command with no filtering:

    iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o ppp0 -j MASQUERADE
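As an illustration, a complete /etc/rc.local carrying that rule might look like the sketch below. This assumes the file is executable and that the rc-local service is enabled, which is not guaranteed on every setup:

    #!/bin/sh -e
    # /etc/rc.local - executed at the end of boot; must be executable
    iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o ppp0 -j MASQUERADE
    exit 0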
Logs

Firewall logs are essential for recognizing attacks, troubleshooting your firewall rules, and noticing unusual activity on your network. You must include logging rules in your firewall for them to be generated, though, and logging rules must come before any applicable terminating rule (a rule with a target that decides the fate of the packet, such as ACCEPT, DROP, or REJECT).

If you are using ufw, you can turn on logging by entering the following in a terminal:

    sudo ufw logging on

To turn logging off in ufw, simply replace on with off in the above command.

If using iptables instead of ufw, enter:

    sudo iptables -A INPUT -m state --state NEW -p tcp --dport 80 \
        -j LOG --log-prefix "NEW_HTTP_CONN: "

A request on port 80 from the local machine, then, would generate a log in dmesg that looks like this (single line split into 3 to fit this document):

    [4304885.870000] NEW_HTTP_CONN: IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00
    SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=58288 DF PROTO=TCP
    SPT=53981 DPT=80 WINDOW=32767 RES=0x00 SYN URGP=0

The above log will also appear in /var/log/messages, /var/log/syslog, and /var/log/kern.log. This behavior can be modified by editing /etc/syslog.conf appropriately or by installing and configuring ulogd and using the ULOG target instead of LOG. The ulogd daemon is a userspace server that listens for logging instructions from the kernel specifically for firewalls, and can log to any file you like, or even to a PostgreSQL or MySQL database. Making sense of your firewall logs can be simplified by using a log analyzing tool such as logwatch, fwanalog, fwlogwatch, or lire.

Other Tools

There are many tools available to help you construct a complete firewall without intimate knowledge of iptables. A command-line tool with plain-text configuration files:

• Shorewall is a very powerful solution to help you configure an advanced firewall for any network.

References

• The Ubuntu Firewall wiki page contains information on the development of ufw.
• Also, the ufw manual page contains some very useful information: man ufw.
• See the packet-filtering-HOWTO for more information on using iptables.
• The nat-HOWTO contains further details on masquerading.
• The IPTables HowTo in the Ubuntu wiki is a great resource.

Certificates

One of the most common forms of cryptography today is public-key cryptography. Public-key cryptography utilizes a public key and a private key. The system works by encrypting information using the public key. The information can then only be decrypted using the private key.

A common use for public-key cryptography is encrypting application traffic using a Secure Socket Layer (SSL) or Transport Layer Security (TLS) connection. One example: configuring Apache to provide HTTPS, the HTTP protocol over SSL/TLS. This allows a way to encrypt traffic using a protocol that does not itself provide encryption.

A certificate is a method used to distribute a public key and other information about a server and the organization responsible for it. Certificates can be digitally signed by a Certification Authority, or CA. A CA is a trusted third party that has confirmed that the information contained in the certificate is accurate.
Types of Certificates

To set up a secure server using public-key cryptography, in most cases you send your certificate request (including your public key), proof of your company's identity, and payment to a CA. The CA verifies the certificate request and your identity, and then sends back a certificate for your secure server. Alternatively, you can create your own self-signed certificate.

Note
Note that self-signed certificates should not be used in most production environments.

Continuing the HTTPS example, a CA-signed certificate provides two important capabilities that a self-signed certificate does not:

• Browsers (usually) automatically recognize the CA signature and allow a secure connection to be made without prompting the user.
• When a CA issues a signed certificate, it is guaranteeing the identity of the organization that is providing the web pages to the browser.

Most of the software supporting SSL/TLS has a list of CAs whose certificates it automatically accepts. If a browser encounters a certificate whose authorizing CA is not in the list, the browser asks the user to either accept or decline the connection. Also, other applications may generate an error message when using a self-signed certificate.

The process of getting a certificate from a CA is fairly easy. A quick overview is as follows:

1. Create a private and public encryption key pair.
2. Create a certificate signing request based on the public key. The certificate request contains information about your server and the company hosting it.
3. Send the certificate request, along with documents proving your identity, to a CA. We cannot tell you which certificate authority to choose. Your decision may be based on your past experiences, or on the experiences of your friends or colleagues, or purely on monetary factors. Once you have decided upon a CA, you need to follow the instructions they provide on how to obtain a certificate from them.
4. When the CA is satisfied that you are indeed who you claim to be, they send you a digital certificate.
5. Install this certificate on your secure server, and configure the appropriate applications to use the certificate.

Generating a Certificate Signing Request (CSR)

Whether you are getting a certificate from a CA or generating your own self-signed certificate, the first step is to generate a key.

If the certificate will be used by service daemons, such as Apache, Postfix, Dovecot, etc., a key without a passphrase is often appropriate. Not having a passphrase allows the services to start without manual intervention, usually the preferred way to start a daemon.

This section will cover generating a key with a passphrase, and one without. The non-passphrase key will then be used to generate a certificate that can be used with various service daemons.

Warning
Running your secure service without a passphrase is convenient because you will not need to enter the passphrase every time you start your secure service. But it is insecure, and a compromise of the key means a compromise of the server as well.

To generate the keys for the Certificate Signing Request (CSR) run the following command from a terminal prompt:

    openssl genrsa -des3 -out server.key 2048

    Generating RSA private key, 2048 bit long modulus
    ..........................++++++
    .......++++++
    e is 65537 (0x10001)
    Enter passphrase for server.key:

You can now enter your passphrase.
For best security, your passphrase should contain at least eight characters. The minimum length when specifying -des3 is four characters. As a best practice it should include numbers and/or punctuation and not be a word found in a dictionary. Also remember that your passphrase is case-sensitive.

Re-type the passphrase to verify. Once you have re-typed it correctly, the server key is generated and stored in the server.key file.

Now create the insecure key, the one without a passphrase, and shuffle the key names:

    openssl rsa -in server.key -out server.key.insecure
    mv server.key server.key.secure
    mv server.key.insecure server.key

The insecure key is now named server.key, and you can use this file to generate the CSR without a passphrase.

To create the CSR, run the following command at a terminal prompt:

    openssl req -new -key server.key -out server.csr

It will prompt you to enter the passphrase. If you enter the correct passphrase, it will prompt you to enter Company Name, Site Name, Email Id, etc. Once you enter all these details, your CSR will be created and stored in the server.csr file.

You can now submit this CSR file to a CA for processing. The CA will use this CSR file and issue the certificate. Alternatively, you can create a self-signed certificate using this CSR.

Creating a Self-Signed Certificate

To create the self-signed certificate, run the following command at a terminal prompt:

    openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

The above command will prompt you to enter the passphrase. Once you enter the correct passphrase, your certificate will be created and stored in the server.crt file.

Warning
If your secure server is to be used in a production environment, you probably need a CA-signed certificate. It is not recommended to use a self-signed certificate.

Installing the Certificate

You can install the key file server.key and certificate file server.crt, or the certificate file issued by your CA, by running the following commands at a terminal prompt:

    sudo cp server.crt /etc/ssl/certs
    sudo cp server.key /etc/ssl/private

Now simply configure any applications with the ability to use public-key cryptography to use the certificate and key files. For example, Apache can provide HTTPS, Dovecot can provide IMAPS and POP3S, etc.
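If you only need a quick certificate for testing, the key and a self-signed certificate can also be produced in one step. The following sketch is an alternative to the multi-step process above; the file names and the subject string are just examples:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout server.key -out server.crt \
        -subj "/C=US/ST=State/L=City/O=Example/CN=www.example.com"

The -nodes option leaves the key unencrypted, which is the same trade-off as the "insecure" key discussed earlier.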
Certification Authority

If the services on your network require more than a few self-signed certificates, it may be worth the additional effort to set up your own internal Certification Authority (CA). Using certificates signed by your own CA allows the various services using the certificates to easily trust other services using certificates issued from the same CA.

First, create the directories to hold the CA certificate and related files:

    sudo mkdir /etc/ssl/CA
    sudo mkdir /etc/ssl/newcerts

The CA needs a few additional files to operate: one to keep track of the last serial number used by the CA (each certificate must have a unique serial number), and another file to record which certificates have been issued:

    sudo sh -c "echo '01' > /etc/ssl/CA/serial"
    sudo touch /etc/ssl/CA/index.txt

The third file is a CA configuration file. Though not strictly necessary, it is very convenient when issuing multiple certificates. Edit /etc/ssl/openssl.cnf, and in the [ CA_default ] section change:

    dir             = /etc/ssl               # Where everything is kept
    database        = $dir/CA/index.txt      # database index file.
    certificate     = $dir/certs/cacert.pem  # The CA certificate
    serial          = $dir/CA/serial         # The current serial number
    private_key     = $dir/private/cakey.pem # The private key

Next, create the self-signed root certificate:

    openssl req -new -x509 -extensions v3_ca -keyout cakey.pem -out cacert.pem -days 3650

You will then be asked to enter the details about the certificate.

Now install the root certificate and key:

    sudo mv cakey.pem /etc/ssl/private/
    sudo mv cacert.pem /etc/ssl/certs/

You are now ready to start signing certificates. The first item needed is a Certificate Signing Request (CSR); see Generating a Certificate Signing Request (CSR) for details. Once you have a CSR, enter the following to generate a certificate signed by the CA:

    sudo openssl ca -in server.csr -config /etc/ssl/openssl.cnf

After entering the password for the CA key, you will be prompted to sign the certificate, and again to commit the new certificate. You should then see a somewhat large amount of output related to the certificate creation.

There should now be a new file, /etc/ssl/newcerts/01.pem, containing the same output. Copy and paste everything from the line beginning with -----BEGIN CERTIFICATE----- through the line -----END CERTIFICATE----- into a file named after the hostname of the server where the certificate will be installed. For example, mail.example.com.crt is a nice descriptive name. Subsequent certificates will be named 02.pem, 03.pem, etc.

Note
Replace mail.example.com.crt with your own descriptive name.

Finally, copy the new certificate to the host that needs it, and configure the appropriate applications to use it. The default location to install certificates is /etc/ssl/certs. This enables multiple services to use the same certificate without overly complicated file permissions.

For applications that can be configured to use a CA certificate, you should also copy the /etc/ssl/certs/cacert.pem file to the /etc/ssl/certs/ directory on each server.
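To sanity-check a certificate issued by your CA, openssl can verify it against the CA certificate and show its subject and validity period. A short sketch, using the file names from the example above:

    openssl verify -CAfile /etc/ssl/certs/cacert.pem mail.example.com.crt
    openssl x509 -in mail.example.com.crt -noout -subject -issuer -dates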
References

• The Wikipedia HTTPS page has more information regarding HTTPS.
• For more information on OpenSSL see the OpenSSL Home Page.
• Also, O'Reilly's Network Security with OpenSSL is a good in-depth reference.

Console Security

As with any other security barrier you put in place to protect your server, it is pretty tough to defend against untold damage caused by someone with physical access to your environment, for example, theft of hard drives, power or service disruption, and so on. Therefore, console security should be addressed merely as one component of your overall physical security strategy.

A locked "screen door" may deter a casual criminal, or at the very least slow down a determined one, so it is still advisable to perform basic precautions with regard to console security. The following instructions will help defend your server against issues that could otherwise yield very serious consequences.

Disable Ctrl+Alt+Delete

Anyone that has physical access to the keyboard can simply use the Ctrl+Alt+Delete key combination to reboot the server without having to log on. While someone could simply unplug the power source, you should still prevent the use of this key combination on a production server. This forces an attacker to take more drastic measures to reboot the server, and will prevent accidental reboots at the same time.

To disable the reboot action taken by pressing the Ctrl+Alt+Delete key combination, run the following two commands:

    sudo systemctl mask ctrl-alt-del.target
    sudo systemctl daemon-reload

eCryptfs is deprecated

eCryptfs is deprecated and should not be used; instead, the LUKS setup as provided by the Ubuntu installer is recommended. For a typical remote server setup, that will in turn need a remote key store, as usually no one is present to enter a key on boot.

Virtualization

Virtualization is being adopted in many different environments and situations. If you are a developer, virtualization can provide you with a contained environment where you can safely do almost any sort of development, safe from messing up your main working environment. If you are a systems administrator, you can use virtualization to more easily separate your services and move them around based on demand.

The default virtualization technology supported in Ubuntu is KVM. For Intel and AMD hardware, KVM requires virtualization extensions, but KVM is also available for IBM Z and LinuxONE, IBM POWER, as well as for ARM64. Qemu is part of the KVM experience, being the userspace backend for it, but it can also be used for hardware without virtualization extensions by using its TCG mode.

While virtualization is in many ways similar to containers, the two are different; containers are implemented via other solutions like LXD, systemd-nspawn, containerd and others.

Multipass

Multipass is the recommended method to create Ubuntu VMs on Ubuntu. It's designed for developers who want a fresh Ubuntu environment with a single command and works on Linux, Windows and macOS. On Linux it's available as a snap:

    sudo snap install multipass --beta --classic

Usage

Find available images

    $ multipass find
    Image                   Aliases           Version          Description
    core                    core16            20190424         Ubuntu Core 16
    core18                                    20190213         Ubuntu Core 18
    16.04                   xenial            20190628         Ubuntu 16.04 LTS
    18.04                   bionic,lts        20190627.1       Ubuntu 18.04 LTS
    18.10                   cosmic            20190628         Ubuntu 18.10
    19.04                   disco             20190628         Ubuntu 19.04
    daily:19.10             devel,eoan        20190623         Ubuntu 19.10

Launch a fresh instance of the current Ubuntu LTS

    $ multipass launch ubuntu
    Launching dancing-chipmunk...
    Downloading Ubuntu 18.04 LTS..........
    Launched: dancing-chipmunk

Check out the running instances

    $ multipass list
    Name                    State             IPv4             Release
    dancing-chipmunk        RUNNING           10.125.174.247   Ubuntu 18.04 LTS
    live-naiad              RUNNING           10.125.174.243   Ubuntu 18.04 LTS
    snapcraft-asciinema     STOPPED           --               Ubuntu Snapcraft builder for Core 18

Learn more about the VM instance you just launched

    $ multipass info dancing-chipmunk
    Name:           dancing-chipmunk
    State:          RUNNING
    IPv4:           10.125.174.247
    Release:        Ubuntu 18.04.1 LTS
    Image hash:     19e9853d8267 (Ubuntu 18.04 LTS)
    Load:           0.97 0.30 0.10
    Disk usage:     1.1G out of 4.7G
    Memory usage:   85.1M out of 985.4M
Connect to a running instance

    $ multipass shell dancing-chipmunk
    Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-42-generic x86_64)
    ...

Don't forget to logout (or Ctrl-D) or you may find yourself heading all the way down the Inception levels... ;)

Run commands inside an instance from outside

    $ multipass exec dancing-chipmunk -- lsb_release -a
    No LSB modules are available.
    Distributor ID:  Ubuntu
    Description:     Ubuntu 18.04.1 LTS
    Release:         18.04
    Codename:        bionic

Stop an instance to save resources

    $ multipass stop dancing-chipmunk

Delete the instance

    $ multipass delete dancing-chipmunk

It will now show up as deleted:

    $ multipass list
    Name                    State             IPv4             Release
    snapcraft-asciinema     STOPPED           --               Ubuntu Snapcraft builder for Core 18
    dancing-chipmunk        DELETED           --               Not Available

And when you want to completely get rid of it:

    $ multipass purge

Integrate into the rest of your virtualization

You might already have other virtualization based on libvirt, either through using the similar but older uvtool or through the common virt-manager. You might, for example, want those guests to be on the same bridge so they can communicate with each other, or you might need access to the graphical output for some reason. Fortunately it is possible to integrate this by using the libvirt backend of multipass:

    $ sudo multipass set local.driver=libvirt

After that, when you start a guest you can also access it via tools like virt-manager or virsh:

    $ multipass launch ubuntu
    Launched: engaged-amberjack

    $ virsh list
     Id    Name                  State
    ----------------------------------------
     15    engaged-amberjack     running

Get help

    multipass help
    multipass help <command>

See the multipass documentation for more details.

Qemu

Qemu is a machine emulator that can run operating systems and programs for one machine on a different machine. Mostly it is not used as an emulator but as a virtualizer, in collaboration with the KVM kernel components. In that case it utilizes the virtualization technology of the hardware to virtualize guests.

While qemu has a command line interface and a monitor to interact with running guests, those are rarely used that way for purposes other than development. Libvirt provides an abstraction from specific versions and hypervisors and encapsulates some workarounds and best practices.

Running Qemu/KVM

While there are much more user-friendly and comfortable ways, using the command below is probably the quickest way to see something called Ubuntu moving on screen: directly running it from the netboot ISO.

Warning: this is just for illustration - not generally recommended without verifying the checksums; Multipass and UVTool are much better ways to get actual guests easily.

Run:

    sudo qemu-system-x86_64 -enable-kvm -cdrom http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso

You could download the ISO for faster access at runtime and, for example, add a disk to the same invocation (a combined sketch follows below):

• Create the disk:

    qemu-img create -f qcow2 disk.qcow 5G

• Use the disk by adding:

    -drive file=disk.qcow,format=qcow2
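Putting those pieces together, a rough sketch of fetching the ISO once and booting it with the freshly created disk attached could look like this; the file names and the memory size are arbitrary choices for the example:

    wget http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso
    qemu-img create -f qcow2 disk.qcow 5G
    sudo qemu-system-x86_64 -enable-kvm -m 1024 \
        -cdrom mini.iso \
        -drive file=disk.qcow,format=qcow2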
Those tools can do much more, as you'll find in their respective (long) man pages. There is also a vast assortment of auxiliary tools to make them more consumable for specific use-cases and needs, for example virt-manager for UI-driven use through libvirt. But in general - even the tools eventually use that - it comes down to:

    qemu-system-x86_64 options image[s]

So take a look at the man pages of qemu and qemu-img and the documentation of qemu, and see which options are the right ones for your needs.

Graphics

Graphics for qemu/kvm always comes in two pieces:

• A front end - controlled via the -vga argument - which is provided to the guest. Usually one of cirrus, std, qxl, or virtio. The default these days is qxl, which strikes a good balance between guest compatibility and performance. The guest needs a driver for what is selected, which is the most common reason to switch from the default to either cirrus (e.g. very old Windows versions).
• A back end - controlled via the -display argument - which is what the host uses to actually display the graphical content. That can be an application window via gtk, or vnc.
• In addition, one can enable the -spice back end (can be done in addition to vnc), which can be faster and provides more authentication methods than vnc.
• If you want no graphical output at all, you can save some memory and CPU cycles by setting -nographic.

If you run with spice or vnc you can use native vnc tools or virtualization-focused tools like virt-viewer. More about these in the libvirt section.

All the options above are considered basic usage of graphics. There are advanced options for further needs. Those cases usually differ in their ease-of-use and capability, and are:

• Need some 3D acceleration: -vga virtio with a local display having a GL context, -display gtk,gl=on. That will use virgil3d on the host and needs guest drivers for [virt3d], which are common in Linux since kernels >= 4.4 but hard to come by for other cases. While not as fast as the next two options, the big benefit is that it can be used without additional hardware and without a proper IOMMU setup for device passthrough.
• Need native performance: use PCI passthrough of additional GPUs in the system. You'll need an IOMMU setup and to unbind the cards from the host before you can pass them through, like -device vfio-pci,host=05:00.0,bus=1,addr=00.0,multifunction=on,x-vga=on -device vfio-pci,host=05:00.1,bus=1,addr=00.1
• Need native performance, but multiple guests per card: like PCI passthrough, but using mediated devices to shard a card on the host into multiple devices and pass those, like -display gtk,gl=on -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/4dd511f6-ec08-11e8-b839-2f163ddee3b3,display=on,rombar=0. More at kraxel on vgpu and the Ubuntu GPU mdev evaluation. The sharding of the cards is driver specific and therefore will differ per manufacturer, like Intel or Nvidia.

Especially the advanced cases can get pretty complex, therefore it is recommended to use qemu through libvirt for those cases. Libvirt will take care of all but the host kernel/bios tasks of such configurations.
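As an illustration of the basic (non-passthrough) case, a guest using the virtio front end with a GL-enabled local GTK window could be started roughly like this; the disk image name and memory size are example values:

    sudo qemu-system-x86_64 -enable-kvm -m 2048 \
        -vga virtio \
        -display gtk,gl=on \
        -drive file=disk.qcow,format=qcow2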
Upgrading the machine type

If you are unsure what this is, you might consider it as buying (virtual) hardware of the same spec but with a newer release date. You are encouraged in general, and might want in particular, to update the machine type of an existing defined guest to:

• pick up the latest security fixes and features
• continue using a guest created on a now unsupported release

In general it is recommended to update machine types when upgrading qemu/kvm to a new major version. But this can likely never be an automated task, as this change is guest visible. The guest devices might change in appearance, new features will be announced to the guest, and so on. Linux is usually very good at tolerating such changes, but it depends so much on the setup and workload of the guest that this has to be evaluated by the owner/admin of the system. Other operating systems were known to often be severely impacted by changing the hardware. Consider a machine type change similar to replacing all devices and firmware of a physical machine with the latest revision - all considerations that apply there apply to evaluating a machine type upgrade as well.

As usual with major configuration changes it is wise to back up your guest definition and disk state to be able to do a rollback just in case.

There is no integrated single command to update the machine type via virsh or similar tools. It is a normal part of your machine definition, and therefore updated the same way as most others.

First shut down your machine and wait until it has reached that state:

    virsh shutdown <yourmachine>
    # wait
    virsh list --inactive
    # should now list your machine as "shut off"

Then edit the machine definition and find the type in the type tag at the machine attribute:

    virsh edit <yourmachine>

Change this to the value you want. If you need to check what types are available, use "-M ?". Note that, while upstream types are provided as a convenience, only Ubuntu types are supported. There you can also see what the current default would be. In general it is strongly recommended that you change to newer types if possible to exploit newer features, but also to benefit from bugfixes that only apply to the newer device virtualization.

    kvm -M ?
    # lists machine types, e.g.
    pc-i440fx-xenial       Ubuntu 16.04 PC (i440FX + PIIX, 1996) (default)
    ...
    pc-i440fx-bionic       Ubuntu 18.04 PC (i440FX + PIIX, 1996) (default)
    ...

After this you can start your guest again. You can check the current machine type from guest and host depending on your needs:

    virsh start <yourmachine>
    # check from host, via dumping the active xml definition
    virsh dumpxml <yourmachine> | xmllint --xpath "string(//domain/os/type/@machine)" -
    # or from the guest via dmidecode (if supported)
    sudo dmidecode | grep Product -A 1
      Product Name: Standard PC (i440FX + PIIX, 1996)
      Version: pc-i440fx-bionic

If you keep non-live definitions around - like xml files - remember to update those as well.

Note
This is also documented, along with some more constraints and considerations, at the Ubuntu Wiki.
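For orientation, the machine type lives in the machine attribute of the <type> element inside the guest's <os> block. A quick, read-only way to peek at it is sketched below; the guest name is an example and the value shown in the comment will differ on your system:

    virsh dumpxml bionic-test | grep machine=
    #     <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>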
QEMU usage for microvms

QEMU became another use case, being used in a container-like style, providing enhanced isolation compared to containers but focusing on initialization speed. To achieve that, several components have been added:

• the microvm machine type
• an alternative simple firmware that can boot Linux, called qboot
• a qemu build with reduced features matching these use cases, called qemu-system-x86-microvm

For example, if you happen to already have a stripped-down workload that has all it would execute in an initrd, you might run it like the following:

    sudo qemu-system-x86_64 -M ubuntu-q35 -cpu host -m 1024 -enable-kvm -serial mon:stdio -nographic -display curses -append 'console=ttyS0,115200,8n1' -kernel vmlinuz-5.4.0-21 -initrd /boot/initrd.img-5.4.0-21-workload

To run the same with microvm, qboot and the minimized qemu, you would do the following:

1. Run it with the microvm machine type, so change -M to -M microvm.
2. Use the qboot bios, adding -bios /usr/share/qemu/bios-microvm.bin.
3. Install the feature-minimized qemu-system package:

    sudo apt install qemu-system-x86-microvm

An invocation will now look like:

    sudo qemu-system-x86_64 -M microvm -bios /usr/share/qemu/bios-microvm.bin -cpu host -m 1024 -enable-kvm -serial mon:stdio -nographic -display curses -append 'console=ttyS0,115200,8n1' -kernel vmlinuz-5.4.0-21 -initrd /boot/initrd.img-5.4.0-21-workload

That will have cut down the qemu, bios and virtual-hw initialization time a lot. You will now, more than you already did before, spend the majority of the time inside the guest, which implies that further tuning probably has to go into that kernel and userspace initialization time.

Note
For now microvm, the qboot bios and other components of this are rather new upstream and not as verified as many other parts of the virtualization stack. Therefore none of the above is the default. Further, being the default would also mean many upgraders would regress, finding a qemu that doesn't have most of the features they are used to. Due to that, the qemu-system-x86-microvm package is intentionally a strong opt-in that conflicts with the normal qemu-system-x86 package.

libvirt

The libvirt library is used to interface with different virtualization technologies. Before getting started with libvirt it is best to make sure your hardware supports the necessary virtualization extensions for KVM. Enter the following from a terminal prompt:

    kvm-ok

A message will be printed informing you if your CPU does or does not support hardware virtualization.

Note
On many computers with processors supporting hardware-assisted virtualization, it is necessary to activate an option in the BIOS to enable it.

Virtual Networking

There are a few different ways to allow a virtual machine access to the external network. The default virtual network configuration includes bridging and iptables rules implementing usermode networking, which uses the SLIRP protocol. Traffic is NATed through the host interface to the outside network.

To enable external hosts to directly access services on virtual machines, a different type of bridge than the default needs to be configured. This allows the virtual interfaces to connect to the outside network through the physical interface, making them appear as normal hosts to the rest of the network. There is a great example at netplan.io of how to configure your own bridge and combine it with libvirt so that guests will use it.

Installation

To install the necessary packages, from a terminal prompt enter:

    sudo apt update
    sudo apt install qemu-kvm libvirt-daemon-system

After installing libvirt-daemon-system, the user used to manage virtual machines will need to be added to the libvirt group. This is done automatically for members of the sudo group, but needs to be done in addition for anyone else that should access system-wide libvirt resources. Doing so will grant the user access to the advanced networking options. In a terminal enter:

    sudo adduser $USER libvirt

Note
If the user chosen is the current user, you will need to log out and back in for the new group membership to take effect.
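After logging back in, a quick way to check that your user can reach the system-wide libvirt daemon and that the default NAT network exists is sketched below; the exact output will differ on your system:

    virsh --connect qemu:///system list --all
    virsh --connect qemu:///system net-list --all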
You are now ready to install a guest operating system. Installing a virtual machine follows the same process as installing the operating system directly on the hardware. You either need:

• a way to automate the installation, or
• a keyboard and monitor attached to the physical machine, or
• cloud images which are meant to self-initialize (see Multipass and UVTool).

In the case of virtual machines, a Graphical User Interface (GUI) is analogous to using a physical keyboard and mouse on a real computer. Instead of installing a GUI, the virt-viewer or virt-manager application can be used to connect to a virtual machine's console using VNC. See Virtual Machine Manager / Viewer for more information.

Virtual Machine Management

The following section covers the command-line tools around virsh that are part of libvirt itself. But there are various options at different levels of complexity and feature-sets, like:

• multipass
• uvt
• virt-* tools
• openstack

virsh

There are several utilities available to manage virtual machines and libvirt. The virsh utility can be used from the command line. Some examples:

• To list running virtual machines:

    virsh list

• To start a virtual machine:

    virsh start <guestname>

• Similarly, to start a virtual machine at boot:

    virsh autostart <guestname>

• Reboot a virtual machine with:

    virsh reboot <guestname>

• The state of virtual machines can be saved to a file in order to be restored later. The following will save the virtual machine state into a file:

    virsh save <guestname> save-my.state

  Once saved, the virtual machine will no longer be running.

• A saved virtual machine can be restored using:

    virsh restore save-my.state

• To shut down a virtual machine do:

    virsh shutdown <guestname>

• A CDROM device can be mounted in a virtual machine by entering:

    virsh attach-disk <guestname> /dev/cdrom /media/cdrom

• To change the definition of a guest, virsh exposes the domain via:

    virsh edit <guestname>

  That will allow you to edit the XML representation that defines the guest, and when saving it will apply format and integrity checks on these definitions.

Editing the XML directly certainly is the most powerful way, but also the most complex one. Tools like Virtual Machine Manager / Viewer can help inexperienced users to do most of the common tasks. If virsh (or other vir* tools) shall connect to something other than the default qemu-kvm/system hypervisor, one can find alternatives for the connect option in man virsh or the libvirt documentation.

system and session scope

Virsh - as well as most other tools to manage virtualization - can be passed connection strings:

    virsh --connect qemu:///system

There are two options for the connection:

• qemu:///system - connect locally as root to the daemon supervising QEMU and KVM domains
• qemu:///session - connect locally as a normal user to their own set of QEMU and KVM domains

The default always was (and still is) qemu:///system, as that is the behavior users are used to. But there are a few benefits (and drawbacks) of qemu:///session to consider.

qemu:///session is per user and can, on a multi-user system, be used to separate the people. But most importantly, things run under the permissions of the user, which means no permission struggle on the just-downloaded image in your $HOME or the just-attached USB stick. On the other hand, it can't access system resources that well, which includes network setup that is known to be hard with qemu:///session. It falls back to slirp networking, which is functional but slow and makes it impossible for the guest to be reached from other systems.

qemu:///system is different in that it is run by the global, system-wide libvirt that can arbitrate resources as needed. But you might need to mv and/or chown files to the right places and fix permissions to have them usable.

Applications usually will decide on their primary use-case.
Desktop-centric applications often choose qemu:///session, while most solutions that involve an administrator anyway continue to default to qemu:///system. Read more about that in the libvirt FAQ and this blog about the topic.

Migration

There are different types of migration available depending on the versions of libvirt and the hypervisor being used. In general those types are:

• offline migration
• live migration
• postcopy migration

There are various options to those methods, but the entry point for all of them is virsh migrate. Read the integrated help for more detail:

    virsh migrate --help

Some useful documentation on constraints and considerations about live migration can be found at the Ubuntu Wiki.

Device Passthrough / Hotplug

If, instead of the hotplugging described here, you want to always pass through a device, add the XML content of the device to your static guest XML representation via e.g. virsh edit; in that case you don't need to use attach/detach. There are different kinds of passthrough, and the types available to you depend on your hardware and software setup:

• USB hotplug/passthrough
• VF hotplug/passthrough

Both kinds are handled in a very similar way, and while there are various ways to do it (e.g. also via the qemu monitor), driving such a change via libvirt is recommended. That way libvirt can try to manage all sorts of special cases for you and also somewhat masks version differences.

In general, when driving hotplug via libvirt you create an XML snippet that describes the device, just as you would do in a static guest description. A USB device is usually identified by vendor/product IDs:
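The snippet below is a rough sketch of how such a hotplug could look; the vendor/product IDs, file name and guest name are examples only and need to be replaced with the values for your device (lsusb shows the IDs) and guest:

    # describe the USB device to pass through
    cat > usb-device.xml <<'EOF'
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0bda'/>
        <product id='0x4823'/>
      </source>
    </hostdev>
    EOF

    # attach it to the running guest...
    virsh attach-device <guestname> usb-device.xml --live
    # ...and detach it again when done
    virsh detach-device <guestname> usb-device.xml --live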