vCAV 3.0 Endpoint mappings.

Playing with vCAV and setting it up for both lab and production environments has become considerably easier with vCAV 3.0, which is due for GA very soon. Given the increased functionality in the solution and the ability to do a single-node deployment for lab or dev environments, understanding the endpoint mappings is important. Below is a detailed look at all things networking for vCAV 3.0. (Copied from work done by Andrey Petrov)

TLDR:

  1. Use port 443 on each appliance to do day 0 or day 2 config unless instructed otherwise.
  2. In dev environments and/or combined appliances, use:
    1. 8443 for the c4 admin UI
    2. 8442 for the tunnel admin UI
    3. 8441 for the manager admin UI
    4. 8440 for the replicator admin UI
  3. When adding local replicators, log into the manager at https://mgr-ip:8441 and use local-rtr-ip:8043 (we almost managed to move this step into the c4 UI in 3.0, obviating the need to ever go to the manager during initial install & config, but that work was postponed and will be completed in a future release).
  4. When pairing with c4, use a hostname/literal IP and port that lead to the tunnel data endpoint for the respective site. If you don't have a port forwarder, haproxy, etc. (which is typically the case in dev environments), use the tunnel data endpoint directly: https://tunnel-ip:8048. Note: the public API endpoint configuration option in c4 should be explicitly set to haproxy:443 or tunnel:8048 (depending on your setup). A quick reachability check is sketched right after this list.
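
A rough way to confirm that the tunnel data endpoint is reachable before pairing (the hostname below is an example; openssl is only used to test the TLS handshake and configures nothing):

openssl s_client -connect tunnel.example.local:8048 < /dev/null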

Services & Deployment roles

The vCAV stack consists of the following services:

  1. hbrsrv – low-level replication data component, used by replications at the destination site only.
  2. lwdproxy – adds encryption, authentication & authorization to hbrsrv, used by replications at both the source and destination sites.
  3. replicator – low-level vCenter replication management component.
  4. h4 manager – vCenter replication management component.
  5. c4 – talks to vCloud Director & an h4 manager to manage and monitor vCD VM/vApp replications.
  6. tunnel – manages TLS tunnels across sites by running an embedded tlsnexus engine.

Deployment roles

Each deployment role runs the following services:

onprem: tunnel, replicator, lwdproxy, hbrsrv
management: c4, manager
replicator: replicator, lwdproxy, hbrsrv
tunnel: tunnel
combined: tunnel, c4, manager, replicator, lwdproxy, hbrsrv

Endpoints

For each endpoint, the list below gives the port, the service name, the deployment roles that expose it, what it is, and who or what uses it.

443 (HTTPS), roles: onprem
Iptables redirect to port 8440 (on-prem replicator admin UI). Used by on-prem admins for installation, configuration and day-2 ops of the on-prem replicator.

443 (HTTPS), roles: combined
Iptables redirect to port 8443 (c4 public API endpoint & portal). Used by provider admins for installation, configuration and day-2 ops of c4.

443 (HTTPS), roles: management
Iptables redirect to port 8443 (c4 public API endpoint & portal). Used by provider admins for installation, configuration and day-2 ops of c4.

443 (HTTPS), roles: replicator
Iptables redirect to port 8440 (cloud replicator admin UI). Used by provider admins for installation, configuration and day-2 ops of the cloud replicator.

443 (HTTPS), roles: tunnel
Iptables redirect to port 8442 (tunnel admin UI). Used by provider admins for installation, configuration and day-2 ops of the cloud tunnel.

8043 (replicator API), roles: onprem, replicator, combined
Replicator API port. Used by the provider admin when adding local replicators; also used internally by the h4 manager.

8044 (manager API), roles: management, combined
H4 manager API port. Used internally by c4 and the replicators. Should never be accessed by a human operator and/or public API client for any purpose.

8045 (lwdproxy API), roles: onprem, replicator, combined
lwdproxy API port. Used internally by replicators to configure lwdproxy routes. Should never be accessed by a human operator and/or public API client for any purpose.

8046 (c4 API), roles: management, combined
c4 API port. Used internally for cross-site communication between c4 instances and between the on-prem replicator/virgo and the cloud c4. Should never be accessed by a human operator and/or public API client for any purpose.

8047 (tunnel API), roles: onprem, tunnel, combined
Tunnel service API port. Used internally by c4 (cloud side) and the replicator (on-prem) to configure the tunneling routes. Should never be accessed by a human operator and/or public API client for any purpose.

8048 (tunnel data), roles: tunnel, combined
This is the single entry point for all ingress traffic, including the UI, the public vCAV API, internal management traffic, replication data, etc. Typically the provider will forward some-public-ip:443 to this endpoint, i.e. this endpoint is visible on the internet / to remote sites. In production, the provider will use dnat/haproxy/nsx/etc to forward public-ip:443 to the tunnel data endpoint and then explicitly set the c4 public API endpoint configuration option to public-ip:443.

In dev/eval use cases, there might not be an haproxy/nsx/etc to forward the traffic, in which case tunnel-addr:8048 can be set directly as the c4 public API endpoint configuration option. For this to work, other c4 sites and on-prem sites must have direct connectivity to tunnel-addr:8048 (this is usually the case in test/eval environments).

In any case, when pairing with this cloud the on-prem admin needs to use the value that was configured in the c4 public API endpoint configuration option.

8123 (hbrsrv API), roles: onprem, replicator, combined
hbrsrv internal VMODL management API. The replicator uses this to configure hbrsrv to accept incoming replications. Access to this port from outside the vCAV appliance should be disallowed by the appliance firewall. Should never be accessed by a human operator and/or public API client for any purpose.

8440 (replicator admin UI), roles: onprem, replicator, combined
Should only be used by provider admins for replicator day-2 operations when the combined deployment role is in use.

8441 (manager admin UI), roles: management, combined
Used by provider admins for manager install/config/day-2 operations. (In a future release the need to use this endpoint during initial setup will be eliminated.)

8442 (tunnel admin UI), roles: onprem, tunnel, combined
Used by admins for tunnel day-2 operations when the combined deployment role is in use.

8443 (c4 public API), roles: management, combined
Should never be used by human / API users.

31031 (hbrsrv), roles: onprem, replicator, combined
Should never be used by human / API users.

44045 (lwdproxy), roles: onprem, replicator, combined
Should never be used by human / API users.

44046 (lwdproxy plain), roles: onprem, replicator, combined
Should never be used by human / API users.
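
For illustration only: the port 443 entries above are plain iptables NAT redirects. The appliances ship with this preconfigured, so there is nothing to run, but the rule on a management or combined node would look roughly like this:

iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443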

FAQ

  • Why do most services (tunnel, c4, manager, replicator) have two endpoints, e.g. c4 has 8443 and 8046?
    – Because client certificates are used internally in the system for authentication across sites, we need a TLS endpoint that accepts client certificates. However, we also need to serve the admin UIs (in the c4 case the admin UI and the tenant UI are the same thing), and we cannot serve the UIs and the internal API on the same endpoint for the following reason. Browsers detect when a server endpoint allows client certificates and, if any client certificates are installed in the browser, prompt the user to select one for the connection. This is at best annoying (the user has to cancel the client cert prompt), can result in notAuthenticated errors (if the user attempts to use, say, their e-banking certificate with the vCAV services), and at worst renders the vCD UI plugin totally unusable (the browser handles ajax requests differently from normal page visits: it rejects outright all ajax requests to server endpoints that allow client certs, without any warning or explanation!). Therefore, we use the 80XX ports for internal communication between components, and the 844X ports to serve our various UIs and other API clients. The curl sketch below illustrates the difference.
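
As a purely illustrative sketch of that difference (the hostname and certificate file names below are made up, and the internal 8046 endpoint is normally not reachable from outside the appliances):

# UI / public endpoint: ordinary TLS, the server never asks for a client certificate (-k only because lab certs are usually self-signed)
curl -k https://vcav.example.local:8443/

# Internal endpoint: the server requests a client certificate during the TLS handshake
curl -k --cert client-cert.pem --key client-key.pem https://vcav.example.local:8046/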

SSL : Part 5.1 : CA Signed Certificates for vCloud Director with a single IP

Update: If you are looking for a post on CA-signed certificates for vCloud Director 10, go here.

This is a tweaked version of Part 5 and produces certificates for a vCD instance running on a single IP address, as opposed to the usual two: one for HTTP access and one for the console proxy. Tomas Fojta explains the configuration for a single IP in this blog. Note that even though vCD is using a single IP, there still need to be two certificates in the keystore during configuration.

Step 1. First we are going to produce an unsigned certificate and place it in a new certificate store. (It will be replaced with a signed cert later, but we need the keys for the signing.) Open an SSH session to your vCD instance and change to the /tmp directory. Execute the command below to create the certificate; it is placed in a certificate store called certificates.ks. You can ignore the warning about the JCEKS keystore using a proprietary format. You should then have a file called certificates.ks in the /tmp directory.


/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks \
-alias http \
-storepass ChangeMe \
-keypass ChangeMe \
-storetype JCEKS \
-genkeypair \
-keyalg RSA \
-keysize 2048 \
-validity 3650 \
-dname "CN=vcd-dc1-003.local, OU=Sales, O=VMware, L=Pittsford, S=New York, C=US" \
-ext "san=dns:vcd-dc1-003.local,dns:vcd-dc1-003,ip:192.168.20.83"
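
If you want to double-check the generated key pair and its SAN entries before moving on, you can list it with the same store password:

/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks \
-storetype JCEKS \
-storepass ChangeMe \
-list -v \
-alias http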

Step 2. Next we are going to produce a certificate signing request (CSR) that we will send on to our Certificate Authority (CA). Once you execute the command below you should end up with a new file called vcd-dc1-003.csr.


/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks \
-storetype JCEKS \
-storepass ChangeMe \
-certreq \
-alias http \
-file vcd-dc1-003.csr \
-ext "san=dns:vcd-dc1-003.local,dns:vcd-dc1-003,ip:192.168.20.83"
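
Optionally, inspect the CSR before sending it off; this just prints the request and changes nothing:

openssl req -in vcd-dc1-003.csr -noout -text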

Step 3. From here we need to get the CSR signed by our lab CA. See Part 2 of this series to find out how. The output of that process will be a .cer file that contains the signed certificate. Make sure you also get a copy of the CA's root certificate, as you will need it in Step 4. See Part 1 of this series to get the CA root certificate if you don't already have it.
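
If your lab CA is openssl-based rather than the one described in Part 2, a rough sketch of signing the request looks like this (it assumes the CA key and certificate are available locally as LabCA.key and LabCA.cer, which will not be the case with a Microsoft CA):

printf "subjectAltName=DNS:vcd-dc1-003.local,DNS:vcd-dc1-003,IP:192.168.20.83" > san.ext
openssl x509 -req -in vcd-dc1-003.csr -CA LabCA.cer -CAkey LabCA.key -CAcreateserial -out vcd-dc1-003.cer -days 3650 -sha256 -extfile san.ext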

Step 4. We are now going to take the two .cer files collected in step 3 and load them into the certificate store file certificates.ks. You will need to use a tool like WinSCP to transfer the files to your vCD server. Once you have them there, execute the next two scripts to get the files into the store. Again, note that you must use the same alias names as in the script.


/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks \
-storetype JCEKS \
-storepass ChangeMe \
-import \
-alias root \
-file LabCA.cer


/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks \
-storetype JCEKS \
-storepass ChangeMe \
-import \
-alias http \
-file vcd-dc1-003.cer

Once this is done, list the contents of the certificate store to make sure that the root and http certificates are in there.
/opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS -storepass ChangeMe -keystore certificates.ks -list

Step 5. As I mentioned above, the configurator for vCD expects two certificates in the store even if you are using a single IP. Since we want to use the same certificate for both, we are going to copy the http certificate into the consoleproxy certificate. Run the three scripts below to create a new file called certs.ks that will contain the three required certificates.


/opt/vmware/vcloud-director/jre/bin/keytool \
-importkeystore \
-srckeystore certificates.ks \
-srcstoretype JCEKS \
-srcstorepass ChangeMe \
-srckeypass ChangeMe \
-srcalias http \
-destkeystore certs.ks \
-deststoretype JCEKS \
-deststorepass ChangeMe \
-destkeypass ChangeMe \
-destalias http


/opt/vmware/vcloud-director/jre/bin/keytool \
-importkeystore \
-srckeystore certificates.ks \
-srcstoretype JCEKS \
-srcstorepass ChangeMe \
-srckeypass ChangeMe \
-srcalias http \
-destkeystore certs.ks \
-deststoretype JCEKS \
-deststorepass ChangeMe \
-destkeypass ChangeMe \
-destalias consoleproxy


/opt/vmware/vcloud-director/jre/bin/keytool \
-importkeystore \
-srckeystore certificates.ks \
-srcstoretype JCEKS \
-srcstorepass ChangeMe \
-srcalias root \
-destkeystore certs.ks \
-deststoretype JCEKS \
-deststorepass ChangeMe \
-destalias root

And once again, list the contents of the certs.ks file to ensure you have three certificates and that the thumbprints for http and consoleproxy are the same.
/opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS -storepass ChangeMe -keystore certs.ks -list

Step 6. We are now ready to reconfigure vCD. In order to get the new certificate store into vCD, we need to run the configure script in unattended mode. The script I use is as follows, but you may need to change it depending on your lab setup. I have installed Postgres on my vCD server, so you may have to adjust if you are using Oracle or MSSQL. (If you are, you should consider switching to Postgres, as support for Oracle is already deprecated in 9.1 and MSSQL will be deprecated in the next release! This is good news as VMware moves to making vCD available as an appliance.)

Stop vCD:

service vmware-vcd stop

Update the configuration:

/opt/vmware/vcloud-director/bin/configure \
-cons 192.168.20.83 \
--console-proxy-port-https 8443 \
-ip 192.168.20.83 \
--primary-port-http 80 \
--primary-port-https 443 \
-dbhost vcd-dc1-003.local -dbport 5432 -dbtype postgres -dbname vcloud -dbuser vcloud -dbpassword 'vcloudpass' \
-k /tmp/certs.ks \
-w 'ChangeMe' \
-loghost vrli-dc1.local \
-logport 514 \
-g \
--enable-ceip true \
-unattended

Start vCD:

service vmware-vcd start

And that’s it. Give vCD a minute or two to get going and then open a new browser session to your instance. It should be nice and secure now.
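
If you also want to confirm the new certificate from the command line, something like this (run from any host with openssl) prints the issuer, subject and validity dates of the certificate being served:

openssl s_client -connect vcd-dc1-003.local:443 -servername vcd-dc1-003.local < /dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates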


What is the vROps Tenant App for vCD …… and how to get it up and running.

This is my first VMware blog and is going to be all about the recently released vRealize Operations Manager Tenant App for vCloud Director. This is a great way to expose vROps performance metrics to tenants in a vCD environment and allow them to only see metrics relevant to their organization. vCD 9.x has been a revolution in the way vCD is evolving and before you say vCD is dead, go to this great blog by Daniel Paluszek and read all about how it is not! This is an awesome way for providers to give their tenants more visibility into the performance of their environments and will help the tenants to do more of the first line maintenance themselves. They will also feel comfortable being able to see first hand what is and isn’t performing in their cloud.

So onto the setup of the Tenant App. There are three main parts to getting this done: installing an AMQP broker, installing the app, and configuring vCD and vROps.

Before we get into the nitty gritty, we need to download the Management Pack and the Appliance. You can find both of them here. You get the management pack by clicking the green ‘Try’ button to the right of the screen, and the appliance .ova by clicking ‘Appliance: vRealize Operations Tenant App for vCloud Director’ in the list of resources.

 

Installing an AMQP Broker

For this part I wanted to use something simple and pretty much out of the box, and Bitnami have a great pre-built VM in .ova format that is just that. It is an install of the tried and trusted RabbitMQ and you can go here to get it. Deploy the .ova and start it up. Once it is done with its setup during boot, you will get a screen something like this in the console:

[screenshot: Bitnami console splash screen]

Log into the console using ‘bitnami’ and ‘bitnami’. You will be required to set a new password during the login process.

We are going to configure the firewall first to allow access to the Management Portal and the application itself. Execute the following at the prompt
sudo ufw allow 15672/tcp
sudo ufw allow 5672
15672 is the port for the RabbitMQ Management Panel and 5672 is the default RabbitMQ Application port.

Print out the firewall rules with sudo ufw status and they should look something like this:
[screenshot: firewall rules]

Next we need to allow connections to RabbitMQ. By default, RabbitMQ only accepts connections from the local host.
Stop the RabbitMQ service
sudo /opt/bitnami/ctlscript.sh stop
and edit the configuration file
sudo vi /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.config
You need to change the TCP listener to accept connections from all or certain IPs, depending on your environment. I set up my instance to accept connections from any IP. Change 127.0.0.1 to 0.0.0.0 to allow any IP to connect. Below are the before and after edits.
[screenshots: rabbitmq.config before and after the edit]
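
If you prefer a non-interactive edit, a rough sed equivalent of the same change is below; the exact listener syntax in the Bitnami image may differ, so check the file first:

sudo sed -i 's/{"127.0.0.1", 5672}/{"0.0.0.0", 5672}/' /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.config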

Finally, restart the RabbitMQ service
sudo /opt/bitnami/ctlscript.sh start
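
To confirm the listener is now bound to all interfaces rather than just loopback, check the listening sockets:

sudo ss -ltn | grep 5672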

Next we need to configure the RabbitMQ server. We are going to do this from the Management Panel. Connect to http://<YourServerIP>:15672 and use the username and password from the console splash screen.
[screenshot: console splash screen with the generated credentials]
In my case it is ‘user’ and ‘jpoDZzvsrOQ2’

Once into the Management Panel, click the ‘Admin’ Tab and then click ‘Add a user’. I created a user called ‘vcd’ and entered a password and clicked ‘Add user’.
[screenshot: 'Add a user' form in the RabbitMQ Management Panel]
The ‘vcd’ user is now in the list. Click it to configure the permissions. I used the defaults and just clicked ‘Set permission’ and ‘Set topic permission’ to allow the user to access the Virtual Host and AMQP default exchange.
[screenshot: user permissions page]
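
If you would rather script this than click through the Management Panel, a rough rabbitmqctl equivalent is shown below (the path to rabbitmqctl on the Bitnami image may differ, and the password is only an example):

sudo /opt/bitnami/rabbitmq/sbin/rabbitmqctl add_user vcd 'YourPassword'
sudo /opt/bitnami/rabbitmq/sbin/rabbitmqctl set_permissions -p / vcd ".*" ".*" ".*"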

With that, you are all set with the RabbitMQ configuration.

Let's get vCD connected up. Connect to the Flex client (you cannot do this in the HTML5 client just yet), go to Administration -> Extensibility and configure the AMQP Broker Settings. Below is what I configured in mine (remember to click Test AMQP to ensure all is good):
[screenshot: AMQP Broker Settings in vCD]

And now vCD is wired up to the AMQP Broker.

 

Installing the vROps Management Pack

Log into vROps as the admin and go to the Administration -> Solutions page. Click the green plus to add a solution and select the .pak file you downloaded earlier. Once the install is completed, click the Configure gears icon and enter your vCD instance details. Mine looks something like this:
[screenshot: management pack connection settings]
Make sure to test the connection to confirm all is working.

And that is it for the installation of the Management Pack.

 

Deploy the Appliance

The final step is to deploy the appliance and register the plugin with vCD. Deploy the Appliance .ova file you downloaded earlier and be sure to configure all the settings correctly during the deployment process. These settings are difficult to change after deployment and if you need to change anything, you are pretty much better off just redeploying. (As of writing this blog, deploying the appliance from the vCenter HTML5 client does not work so you need to use the flex client.)
[screenshot: Tenant App OVA deployment settings]

Once the deployment is done and you start the VM, you should see the URL to connect to in the console.

The final piece of the puzzle is to register the plugin with vCD. You do this from the console of the Tenant App you just installed. Connect as root and run

cd /opt/vmware/plugin

python publish.py -H vcd-dc1-001.local -u administrator@system -p <YourAdminPassword>

Note that you must use administrator@system as the username and not administrator@vsphere.local.

And that’s it! You should now be able to log into the HTML 5 vCD portal and open the Operations Manager screen. Mine looks like this:
[screenshot: Operations Manager view in the vCD HTML5 portal]