In this article, which serves as a reminder for when I’ll have to reinstall a proxmox based server, I’ll describe how to configure a basic installation of a proxmox based server (mine is hosted by Online.net). As Online provides the server with proxmox already installed, this article doesn’t describe how to install proxmox (you can find some instructions on their wiki) and starts with a fresh proxmox installation.
Note: some sections don’t have explanations; if that bothers you, please open an issue on GitHub and I’ll try to explain them to you (and update this page accordingly).
Give a linux user administrator rights in proxmox
PAM-based Proxmox administrator
# Create a new user (replace user with your username)
pveum useradd user@pam
# Define the group:
pveum groupadd admin -comment "System Administrators"
# Then add the permission:
pveum aclmod / -group admin -role Administrator
# You can finally add users to the new 'admin' group:
pveum usermod user@pam -group admin
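To quickly check that the user, the group and the ACL were registered, you can look at Proxmox’s user configuration file (a small sketch; the exact field layout of user.cfg varies a bit between Proxmox versions):
Check the new administrator
# Users, groups and ACLs are all stored in /etc/pve/user.cfg
grep -E '^(user|group|acl)' /etc/pve/user.cfg
# Expect a user:user@pam entry, a group:admin entry containing it,
# and an acl entry mapping the Administrator role to @admin on /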
On your proxmox host, open the /etc/network/interfaces file and add at the end:
/etc/network/interfaces
# Content by your provider
### SNAT & DNAT INTERNET
#########################
auto vmbr10
iface vmbr10 inet static
address 192.168.10.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
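To apply the change without rebooting, you can bring the new bridge up by hand and check that IP forwarding is enabled (a quick sketch, assuming the standard Debian ifupdown tools):
Bring up vmbr10
# Bring the bridge up and verify its address
ifup vmbr10
ip addr show vmbr10
# The post-up line should have enabled forwarding (expects 1)
cat /proc/sys/net/ipv4/ip_forward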
To manage the firewall of our server, I have created a simple script that can be installed as an init service (soon to be replaced by a systemd unit). This script sets up a NAT for the packets emitted by your VMs towards the external network, and it lets you define some rules for your services (this is only useful if you have a single public IP; otherwise the script is of little interest to you).
/etc/init.d/firewall
#!/bin/sh
### BEGIN INIT INFO
# Provides: firewall
# Required-Start: mountkernfs ifupdown $local_fs
# X-Start-Before: networking
# Default-Start: 2 3 4 5
# Required-Stop:
# Default-Stop: 0 1 6
# Short-Description: Configure iptables.
# Description: Configure iptables.
### END INIT INFO
IPT=/sbin/iptables
SERVER_IP=195.154.200.123
case "$1" in
start) echo "Starting Aegis Firewall"
# NAT
######
$IPT -t nat -A POSTROUTING -o vmbr0 -s 192.168.10.0/24 ! -d 192.168.10.0/24 \
-j SNAT --to $SERVER_IP -m comment --comment "snat vm to ext"
# Routing vm services
$IPT -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 \
-j DNAT --to 192.168.10.10:80 -m comment --comment "revproxy tcp/80"
$IPT -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 \
-j DNAT --to 192.168.10.10:443 -m comment --comment "revproxy tcp/443"
# FireWall
###########
$IPT -A FORWARD -s 192.168.10.0/24 -j ACCEPT
$IPT -A FORWARD -d 192.168.10.0/24 -j ACCEPT
$IPT -A INPUT -i vmbr0 -p tcp --destination-port 8006 ! -s 192.168.10.0/24 \
-j DROP -m comment --comment "block proxmox gui except revproxy"
$IPT -A INPUT -m state --state NEW -m tcp -p tcp \
-m multiport --dports 5901:5903,6001:6003,17523 -j ACCEPT
;;
stop) echo "Stopping Firewall"
$IPT -t nat -F
$IPT -F
;;
*) echo "Usage: /etc/init.d/firewall {start|stop}"
exit 2
;;
esac
exit 0
When installing this script as a service, don’t forget to make it executable and to register it as a startup service:
firewall service on startup
chmod +x /etc/init.d/firewall
update-rc.d firewall defaults
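You can then start the service once by hand and verify that the NAT and filtering rules are in place (a quick check using the standard iptables listing commands):
Start and check the firewall
service firewall start
# List the NAT rules (SNAT/DNAT) with their counters
iptables -t nat -L -n -v
# And the filtering rules
iptables -L -n -v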
Before anything, download the Debian template: in the local storage section, Content tab, click on Templates and select the Debian 7 base image.
Then create a container with that image and the following information:
Template container configuration
General: choose whatever you want, just remember it ;)
Template: the Debian template you just downloaded
Resources: the default values are enough for a template
Network: choose the bridged mode with the newly created vmbr10
DNS: use host settings for now
Once the container is ready, start it and log into it:
Start and enter container 100
vzctl start 100
vzctl enter 100
Edit the network configuration to let your CT access the internet.
/etc/network/interfaces
auto eth0
iface eth0 inet static
address 192.168.10.5
netmask 255.255.255.0
gateway 192.168.10.1
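After editing the file, restart the interface inside the container and check that the SNAT on the host works (a minimal sketch; it assumes outbound ICMP is not filtered by your provider):
Check container connectivity
# Reload eth0 with the new static configuration
ifdown eth0; ifup eth0
# The gateway is the vmbr10 address of the host
ping -c 3 192.168.10.1
# Quick test of the SNAT towards the internet
ping -c 3 8.8.8.8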
Install common packages (edit this list if you want more or fewer packages)
Install common packages
apt-get update; apt-get upgrade -y
apt-get install -y htop iotop tree zsh ca-certificates sudo
Add default user and make it a sudoer
Default sudoer user
adduser username
usermod -aG sudo username
This step is optional; if you don’t want to use puppet (which will be configured later), you can safely skip it.
Prepare puppet agent
wget https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
dpkg -i puppetlabs-release-wheezy.deb
apt-get update && apt-get install puppet
Now that we have a container with all of our base tools, we need to create the base image for our next containers. To do that, we will begin by stopping the container and removing the attached network interface (eth0). Then we will create a tar archive of the container file system, place that archive in the right location, and we will be done.
Create an openVZ template
# These commands need to be run as root on the hypervisor
vzctl stop 100
vzctl set 100 --save --netif_del eth0
cd /var/lib/vz/private/100
tar -cvzpf /var/lib/vz/template/cache/debian-7.0-improved_amd64.tar.gz .
That’s it, you have a new template!
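Before using it, you can quickly check that the archive contains a complete root filesystem; the template should then appear in the template list when you create a new container in the GUI. A small sanity check:
Check the new template
# The archive should list a full root filesystem (bin/, etc/, usr/, ...)
tar -tzf /var/lib/vz/template/cache/debian-7.0-improved_amd64.tar.gz | head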
A great article on how to create your own SSL certificates is available online at https://help.ubuntu.com/14.04/serverguide/certificates-and-security.html. Consequently, this article will only list the commands to run to obtain a self-signed certificate and WILL NOT explain the underlying concepts or the trade-offs made.
Prepare root authority
# Make directories where certificates will be stored
mkdir /etc/ssl/CA
mkdir /etc/ssl/newcerts
# Create certificates serial and index for the Certificate Authority
echo '01' > /etc/ssl/CA/serial
touch /etc/ssl/CA/index.txt
Edit the file /etc/ssl/openssl.cnf and, in the [ CA_default ] section, add or modify:
/etc/ssl/openssl.cnf
dir = /etc/ssl # Where everything is kept
database = $dir/CA/index.txt # database index file.
certificate = $dir/certs/cacert.pem # The CA certificate
serial = $dir/CA/serial # The current serial number
private_key = $dir/private/cakey.pem # The private key
Create the root certificate
# Next we create the self-signed certificate:
openssl req -new -x509 -sha256 -extensions v3_ca -keyout cakey.pem -out cacert.pem -days 730
# Install root certificate and private key
mv cakey.pem /etc/ssl/private/
mv cacert.pem /etc/ssl/certs/
You can now sign your own certificates.
This procedure must be done for each new certificate you want. The first time, you can edit the openssl.cnf file to change the default values that will be proposed when creating a CSR; that can be done in the [ req_distinguished_name ] section.
Generate a CSR
# You should enter a passphrase (at least 4 characters); you can create
# a key without a passphrase later if your service needs one.
openssl genrsa -des3 -out server.key 2048
# Generate the key without passphrase
openssl rsa -in server.key -out server.key.insecure
mv server.key server.key.secure
mv server.key.insecure server.key
# Generate the CSR
openssl req -new -key server.key -out server.csr
Sign the CSR by our CA
# The pass asked is the CA one
openssl ca -in server.csr -config /etc/ssl/openssl.cnf
# Set $NAME to the name of your cert (e.g. francois.monniot.eu.crt) and $NUM to the serial of the created certificate (its file name in /etc/ssl/newcerts)
export NAME=tmp NUM=01
nawk 'v{v=v"\n"$0}!/^#/ && /----BEGIN/ {v=$0}/----END/&&v{ print v > "'$NAME'.crt" close("'$NAME'.crt")}' /etc/ssl/newcerts/$NUM.pem
mv $NAME.crt-1 $NAME.crt
Congratulations, you have a self-signed SSL certificate that you can deploy wherever you want!
And because it can be a bit tedious, I have made a simple script that does all these operations in one line: sh create_insecure.sh $CRTNAME
create_insecure.sh
#!/bin/sh
SSLDIR=/etc/ssl/
if [ -z "$1" ]; then
KEYNAME=server
else
KEYNAME=$1
fi;
echo Creating insecure certificate $KEYNAME
mkdir $SSLDIR$KEYNAME
cd $SSLDIR$KEYNAME
echo Please provide a passphrase of at least 4 characters
openssl genrsa -des3 -out $KEYNAME.key 2048
# Generate the key without passphrase
openssl rsa -in $KEYNAME.key -out $KEYNAME.key.insecure
mv $KEYNAME.key $KEYNAME.key.secure
mv $KEYNAME.key.insecure $KEYNAME.key
# Generate the CSR
openssl req -new -key $KEYNAME.key -out $KEYNAME.csr
# Sign the certificate
echo The pass asked is the CA one
openssl ca -in $KEYNAME.csr -config $SSLDIR"openssl.cnf"
LASTNEWCERTS=$(ls -t $SSLDIR"newcerts" | head -1)
echo Extract crt from $LASTNEWCERTS
nawk 'v{v=v"\n"$0}!/^#/ && /----BEGIN/ {v=$0}/----END/&&v{ print v > "'$KEYNAME'.crt" close("'$KEYNAME'.crt")}' $SSLDIR"newcerts/"$LASTNEWCERTS
mv $KEYNAME.crt-1 $KEYNAME.crt
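For example, to create the certificate used later by the reverse proxy (revproxy is just the name given to the script; the resulting files land in /etc/ssl/revproxy/):
Example run
sh create_insecure.sh revproxy
# The directory now contains the key (with and without passphrase),
# the CSR and the signed certificate
ls /etc/ssl/revproxy
# revproxy.crt  revproxy.csr  revproxy.key  revproxy.key.secure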
Our reverse proxy is in a container based on the custom Debian 7 template in bridged mode (vmbr10). This container is configured with a static IP of 192.168.10.10 (on eth0).
As we have only one IP address for our server, we need to NAT our VMs/CTs. To do that you can use the following IPTables rules (if you have used the script from the Firewall section, you already have them).
Setting iptables rules
# Already active from /etc/init.d/firewall
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 192.168.10.10:80 -m comment --comment "revproxy tcp/80"
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to 192.168.10.10:443 -m comment --comment "revproxy tcp/443"
Also, we will configure nginx to use HTTPS, so you need to generate a certificate revproxy.crt and a key revproxy.key without passphrase (here’s how) and copy them into the newly created container. Remember that the FQDN you enter will determine for which URL the certificate will be valid.
Copy certificate on revproxy
scp revproxy.crt 192.168.10.10:/etc/ssl/revproxy.crt
scp revproxy.key 192.168.10.10:/etc/ssl/revproxy.key
More configuration examples can be found in /usr/share/doc/nginx-doc/examples/ (provided by the nginx-doc package).
Install nginx
apt-get install nginx
We force all websites to use HTTPS.
/etc/nginx/conf.d/force_https.conf
server {
    listen 80 default_server;
    rewrite ^(.*) https://$host$1 permanent;
}
Before anything, edit the default site to use HTTPS instead of plain HTTP. We will also use our own index rather than the one provided by nginx.
/etc/nginx/sites-available/default
server {
    listen 443;
    server_name ashelia.me;
    root /var/www;
    index index.html index.htm;
    ssl on;
    ssl_certificate /etc/ssl/revproxy.crt;
    ssl_certificate_key /etc/ssl/revproxy.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.2;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;
    location / {
        try_files $uri $uri/ /index.html;
    }
}
Copy index and reload nginx
mkdir /var/www
cp /usr/share/nginx/www/index.html /var/www/index.html
service nginx reload
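A quick way to verify both the HTTP to HTTPS redirection and the certificate is curl from the hypervisor (a sketch; -k is needed because the certificate is self-signed, and curl may need to be installed first):
Check the redirection
# Should answer with a 301 redirect to https://
curl -I http://192.168.10.10
# Should serve our index over TLS (-k skips the self-signed cert verification)
curl -kI https://192.168.10.10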
At the same time we can also expose the Proxmox GUI via our proxy to get a nicer URL (here proxmox.ashelia.me). Remember to generate a new certificate with the correct URL for this to work, and store the files as /etc/ssl/proxmox.(crt|key).
/etc/nginx/sites-available/proxmox-gui
upstream proxmox {
    server 192.168.10.1:8006;
}
server {
    listen 443;
    server_name proxmox.ashelia.me;
    ssl on;
    ssl_certificate /etc/ssl/proxmox.crt;
    ssl_certificate_key /etc/ssl/proxmox.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.2;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;
    proxy_redirect off;
    location / {
        # Also proxy websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        # Proxy HTTPS
        proxy_pass https://proxmox;
    }
}
Enable the proxmox integration
ln -s /etc/nginx/sites-available/proxmox-gui /etc/nginx/sites-enabled/
service nginx reload
And that’s it! You now know how to configure nginx as a reverse proxy.
Adapted from http://documentation.fusiondirectory.org/en/documentation/admin_installation.
The LDAP directory is in a container based on the custom Debian 7 template in bridged mode (vmbr10). This container is configured with a static IP of 192.168.10.15 (on eth0).
All the work will be done inside the container, except for the reverse proxy, which needs to be configured to proxy the web interface. This part is assumed to be already done; it can easily be deduced from the proxmox configuration, as it is the same configuration (minus the upstream definition, and the location block where you need to insert rewrite ^(.*)$ /fusiondirectory/$1 break; before the proxy_pass instruction).
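For reference, here is a minimal sketch of what this site file could look like. The server_name, the certificate paths and the assumption that the web interface listens on port 80 of the LDAP container are examples to adapt to your setup:
/etc/nginx/sites-available/fusiondirectory
upstream fusiondirectory {
    server 192.168.10.15:80;
}
server {
    listen 443;
    server_name ldap.ashelia.me;
    ssl on;
    ssl_certificate /etc/ssl/fusiondirectory.crt;
    ssl_certificate_key /etc/ssl/fusiondirectory.key;
    location / {
        # FusionDirectory lives under /fusiondirectory on the backend
        rewrite ^(.*)$ /fusiondirectory/$1 break;
        proxy_pass http://fusiondirectory;
    }
}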
The LDAP structure can be represented by the following figure:
LDAP structure
dc=monniot,dc=eu
│
│— ou=groups
│ │— cn=admin
│ │— cn=users
│
│— ou=config
│ │— cn=mail
│
│— ou=people
│— cn=francois
│— cn=admin
Each entry has a different meaning:
groups contains all the groups of our LDAP.
config contains the configuration of our services.
people contains the users of the system.
LDAP installation and configuration
# You will be asked for an LDAP password
apt-get install slapd ldap-utils
dpkg-reconfigure slapd
The OpenLDAP reconfiguration (which exposes more specific options) will ask you a few questions (DNS domain name, organization name, administrator password, database backend); answer them according to your environment.
You can now check the LDAP server status; it should be running.
Check LDAP status
/etc/init.d/slapd status
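You can also make a quick anonymous search to confirm that the directory answers with your base DN (replace dc=monniot,dc=eu with the base DN you chose during dpkg-reconfigure):
Query the directory
# Anonymous (-x) search of the base entry only
ldapsearch -x -H ldap://localhost -b dc=monniot,dc=eu -s base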
And because managing an LDAP server manually (i.e. without a GUI) is a bit tedious, we will install fusiondirectory, a web interface that provides many basic operations (e.g. managing users and mail) and is extensible via plugins if needed.
We add its Debian repositories to simplify the install process.
/etc/apt/sources.list.d/fusiondirectory.list
# fusiondirectory repository
deb http://repos.fusiondirectory.org/debian wheezy main
# fusiondirectory debian-extra repository
deb http://repos.fusiondirectory.org/debian-extra wheezy main
And we register the GPG key of these repositories:
Install GPG key
# Import key
gpg --recv-key E184859262B4981F --keyserver keyserver.ubuntu.com
gpg --export -a "Fusiondirectory Archive Manager <[email protected]>" > FD-archive-key
apt-key add FD-archive-key
apt-get update
# Check packages
apt-cache search fusiondirectory | more
First, we install some LDAP schemas needed by fusiondirectory.
Install base LDAP schema
apt-get install fusiondirectory-schema schema2ldif
# Install schema
fusiondirectory-insert-schema
Then we check that the schemas are present in the LDAP: fusiondirectory-insert-schema -l must output:
core
cosine
nis
inetorgperson
samba
core-fd
core-fd-conf
ldapns
recovery-fd
Install fusiondirectory
apt-get install fusiondirectory
Configure it through the web interface by following the given instructions.
Some tips that can save you time:
To (re)define the LDAP admin password, create a hash with the slappasswd command and, in the file /etc/ldap/slapd.d/cn=config/olcDatabase={1}hdb.ldif, replace the value of the line olcRootPW: <result_of_slappasswd> with the newly created hash.
In /etc/apache2/conf.d/fusiondirectory.conf, add an alias for javascript; the beginning of the file should look something like:
Fix js links
# Include FusionDirectory to your web service
Alias /fusiondirectory/javascript /usr/share/javascript
Alias /fusiondirectory /usr/share/fusiondirectory/html
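After modifying the Apache configuration, reload it so the new aliases are taken into account:
Reload Apache
service apache2 reload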
In the Datacenter category, go to the Authentication tab and add an LDAP server with the following configuration: the server is the IP of the LDAP container (192.168.10.15), the Base Domain Name is your LDAP base DN, and the User Attribute Name is the attribute you log in with (uid or mail). Other options can be left at their default values.
You need to create a user in proxmox (Users tab) even with LDAP. So create your own user with the LDAP realm (and add it to the admin group if you want to retain admin rights). Now you can log off and log back in with your newly created user.
Voilà, proxmox uses your LDAP!
We will use Gitlab to manage our git server. Instructions are available on their official website.
When you have installed Gitlab, create a revproxy entry for it (like the proxmox one).
In the gitlab.yml config file, set the port to 443, https to true, and the host to the address where Gitlab will be accessible (it is the address that Gitlab will use to display the git URLs).
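For a source installation of that era, the relevant part of gitlab.yml looks roughly like the sketch below (the hostname is a hypothetical example and key names can differ slightly between Gitlab versions, so double-check against the comments of your own file):
config/gitlab.yml (excerpt)
production: &base
  gitlab:
    # Address through which Gitlab will be reachable via the revproxy (example value)
    host: gitlab.ashelia.me
    port: 443
    https: true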
Some modifications need to be made to the various nginx configs; on the revproxy, add some proxy headers in the location section:
gitlab revproxy location headers
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
And on the Gitlab container, add the line proxy_set_header X-Forwarded-Ssl on; to the other proxy headers in the nginx configuration.
Tip: once connected as root, don’t forget to disable user signup. To do that, go to the administration section, Settings panel, and uncheck “Signup enabled”.
In gitlab.yml, verify that ldap.enabled is set to true and that its parameters (host, port, uid, bind_dn and the others) are correct.
If you have used fusiondirectory, you have to manually add an email address to your LDAP profile. Use ldapmodify as described below.
The first line is the command to type in your shell (in the ldap container); you’ll then be asked for the password of your admin user (the one set during the LDAP Debian installation). After that, type the four following lines and validate with two carriage returns (press enter twice). None of these commands will give you any feedback, and that’s perfectly normal.
ldap add mail attribute
ldapmodify -H ldap://localhost -D cn=admin,<your base DN> -x -W
dn: uid=<your user uid>,ou=people,<your base DN>
changetype: modify
add: mail
mail: <your user email>
You can verify that the mail attribute has been created with the command:
ldap search mail attribute
ldapsearch -D cn=admin,<your base DN> -W -b "ou=people,<your base DN>" mail
As we have a proxy between our host and the gitlab instance, we also need to NAT a port from the host (here 2222) to port 22 of the gitlab machine. A simple iptables rule could be the following:
iptables -t nat -I PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to 192.168.10.25:22 -m comment --comment "gitlab ssh"
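To make this rule survive a reboot, you can add the equivalent line to the start) section of the firewall script described earlier, next to the two revproxy DNAT rules:
/etc/init.d/firewall (addition)
$IPT -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 \
-j DNAT --to 192.168.10.25:22 -m comment --comment "gitlab ssh"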
Don’t forget to edit gitlab.yml and tell gitlab that the ssh port is 2222 (key gitlab_shell.port).
In this section I’m not going to describe how to install a DNS system; instead I’ll let you read two excellent articles written by Jack Brennan on his blog: How To: DNS with BIND9 on Debian – Part 1/2 and Part 2/2.
Once your DNS is in place, don’t forget to change all your previous machines to use it (edit /etc/resolv.conf, add the IP of your DNS and comment out all the others). It can be a good idea to update your base template with the DNS already configured.
Bonus: update your DNS on a Gitlab push.
If you keep your DNS configuration under a version control system (like git), it can be interesting to update your DNS zones automatically. That way you just have to push a modification and, hop, your DNS is already updated!
To do that, we need to run three commands when a push occurs: git pull, named-checkconf and bind9 restart. To that effect I have developed a small tool (written in Go and available on Github) that spins up a web server and executes some preconfigured actions when its API is called.
To install it:
# Install the software (choose your software architecture)
wget https://github.com/fmonniot/webhook-listener/releases/download/v1.0/webhook-listener-linux64 -O /usr/local/bin/webhook-listener
chmod u+x /usr/local/bin/webhook-listener
# As a service
wget https://raw.githubusercontent.com/fmonniot/webhook-listener/master/support/webhook-listener -O /etc/init.d/webhook-listener
chmod u+x /etc/init.d/webhook-listener
# Configuration (more details at https://github.com/fmonniot/webhook-listener)
wget https://raw.githubusercontent.com/fmonniot/webhook-listener/master/config.json -O /etc/webhook-listener.json
Don’t forget to edit the TLS section if you want a secure server and to change the API key for your endpoints.
In Gitlab, in your DNS project add the webhook URI: https://<webhook-server>:8080/<your/endpoint>?apiKey=your_api_key
And tada! Your DNS is now automatically updated.