Söldner Consult certified as official Google Cloud trainers


With Google Cloud, Söldner Consult adds the third major cloud provider, alongside Amazon Web Services (AWS) and Microsoft Azure, to its training and services portfolio.

“We are very pleased about the partnership with Google, which allows us to expand our services in the cloud field even further. Our customers now have access to a unique range of services for their cloud projects,” explains Prof. Dr. Jens-Henrik Söldner, managing director and, alongside Dr. Guido Söldner, also an official Google Cloud trainer.

The Google training program offers three different learning paths:

  • Cloud Infrastructure Track: implementing, deploying, migrating, and maintaining applications in the cloud
  • Data & Machine Learning Track: designing, building, analyzing, and optimizing big data solutions
  • Application Development Track: developing cloud applications

Söldner Consult looks forward to supporting customers in all three areas.

Select instance types from IaaS Blueprints


Requesting machines with VMware vRealize Automation is not very complicated, but sometimes we want to make it even simpler. For example, we could let the user choose only whether the size of the machine is small, medium, or large, instead of picking exact memory or CPU values.

We could do this by changing the BuildingMachine workflow of vRA with the vCAC Designer, but I like vRO very much, so let’s try it with that.

 

First we have to create a new property dictionary for the user in vRA under “Infrastructure->Blueprints”. For this example we build a drop-down list with the values “small, medium, large”. Let’s give it a unique name, like “virtualMachine.custom.size”.

Next we add it to a property profile and attach that profile to our target blueprint. When we test it, we should now see the drop-down list on the request screen, but of course it doesn’t do anything yet.

Now we continue with the Orchestrator. There is a built-in workflow called “workflow template” under “Library->vCloud Automation Center->Infrastructure Administration->Extensibility”. (Make sure you have registered vRA with your Orchestrator.) This template simply gets all custom properties from vRA, so we duplicate it as a starting point for our workflow.

For now, this workflow contains only a scriptable task. Let’s modify it to read our property:

var size = "medium";
for each (var key in vCACVmProperties.keys) {
    switch (key) {
        case "virtualMachine.custom.size":
            size = vCACVmProperties.get(key);
            System.log("Found virtualMachine.custom.size: " + size);
            break;
    }
}

Now we can set our environment:

if (size != "") {
    switch (size) {
        case "small":
            memory = "512";
            cpu = "1";
            break;
        case "medium":
            memory = "1024";
            cpu = "2";
            break;
        case "large":
            memory = "2048";
            cpu = "4";
            break;
    }
}

The variables “memory” and “cpu” in this case are output parameters. For now they are hardcoded; of course you could get the values from elsewhere.
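Since the mapping logic is plain JavaScript, it can also be factored into a small helper function and tested outside vRO. A minimal sketch (the function name and the fallback to “medium” are my own choices; the size names and values match the snippets above):

```javascript
// Map the value of virtualMachine.custom.size to memory (MB) and CPU count.
// Inside the scriptable task you would call this with the value read from
// vCACVmProperties and assign the result to the output parameters.
function sizeToResources(size) {
    var presets = {
        small:  { memory: "512",  cpu: "1" },
        medium: { memory: "1024", cpu: "2" },
        large:  { memory: "2048", cpu: "4" }
    };
    // Unknown or empty values fall back to "medium", like the default above.
    return presets[size] || presets.medium;
}
```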

In the next step, we need to set existing custom properties on our future virtual machine. Let’s drag a workflow element from the toolbox and look for the workflow “Create/update property on virtualMachine Entity” under “Library->vCloud Automation Center->Infrastructure Administration->Extensibility->Helpers”. Then set the input parameters:

“Host” and “virtualMachine entity” can simply be bound to our existing ones. We set all the Boolean values to “false”, “PropertyName” to “VirtualMachine.Memory.Size”, and “PropertyValue” to our “memory” variable from above.

The same again for the CPU, with “PropertyName” set to “VirtualMachine.CPU.Count” and our “cpu” variable.

Our example workflow is now done; we can save and close it. The last task is to register it with our blueprint. For that, we look for the workflow “Assign a state change workflow to a blueprint and its virtual machines”, again in the “Extensibility” folder. Start it and choose your vRA instance and the workflow stub “BuildingMachine”, then your blueprint, and finally the workflow we just built.

Back in vRA you can test it now. If you request a machine from your blueprint, you should see our workflow running in vRO. If you look at the details of your new machine in vRA, you will not see the new values, because vRA does not pick them up, but if you check the machine itself, you should see the updated values.

Installing vRealize Orchestrator (vRO) Puppet plugin


It’s very simple to manage VMs with VMware vRealize Orchestrator. However, if you are looking for further automation, you could use the configuration management tool Puppet from Puppet Labs. All machines you want to manage with Puppet need an agent installed, and if you want an automated installation of this agent, the new vRealize Orchestrator (vRO) Puppet plugin comes in handy.

First you have to download the Plugin here.

Now you log into the vRO configuration website (https://x.x.x.x:8283, where x.x.x.x is the IP of your vRO). In the “General” category, under the “Install Application” tab, select the .vmoapp file that you have just downloaded and click install. You should now find it in the “Plugins” category with the message “Will perform installation at next server startup.”. So in the “Startup Options” category we restart the server (not only the service), and that’s it.

If you start your vRO client now, you will find some additional workflows in the “Library” folder under “Puppet”. Before you can use the automatic agent installation, you need to register the Puppet Master. There are some prerequisites to fulfill:

  • Verify that Puppet Enterprise 3.7.0, Puppet Enterprise 3.3, Puppet Open Source 3.7.1, or Puppet Open Source 3.6.2 is installed.
  • Verify that you can connect to the Puppet Master using SSH from the Orchestrator server.
  • Verify that the SSH daemon on the Puppet Master allows multiple sessions. This is controlled in the configuration file /etc/ssh/sshd_config; the parameter must be set to MaxSessions 10.
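The session limit from the last prerequisite can be checked and set on the Puppet Master like this (a sketch assuming a Linux host with systemd; adjust the restart command to your distribution):

```shell
# Check the current session limit in the SSH daemon configuration
grep -i '^MaxSessions' /etc/ssh/sshd_config

# Append the required value if it is missing, then restart the daemon
echo 'MaxSessions 10' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd
```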

If you don’t have a Puppet Master installed yet, you can download a Learning VM here. The package also contains detailed documentation on how to install/import it and which credentials and names are used.

Now you need the IP and the credentials of the Puppet Master; start the “Add a Puppet Master” workflow under “Puppet->Configuration”. (There are also workflows to remove or update a Puppet Master.) If the workflow finishes without errors, you have successfully registered your Puppet Master. To be sure, you can also run the “Validate Puppet Master” workflow in the same folder.

Finally, you can use the workflows to install Puppet agents on your machines. There are two options in the “Node Management” folder: “Install Linux Agent with SSH” and “Install Windows Agent with Powershell”. For the Linux agent we obviously need SSH, which shouldn’t be a problem; for the Windows agent we need PowerShell, which does not exist on Windows servers older than 2008. So, to use our workflows with 2003 servers, we have to install PowerShell first.

Additionally, PowerShell does not allow remote access by default, so you have to activate it on the servers with “Enable-PSRemoting”. If a server is not in the same domain as the client (vRO), you also need to install a certificate on every server and register it with PowerShell:

New-WSManInstance -ResourceURI winrm/config/Listener -SelectorSet @{Transport='HTTPS'; Address='IP:x.x.x.x'} -ValueSet @{Hostname='x.y.org'; CertificateThumbprint='XXXXXXX'}

Now you can start the install workflow; if it succeeds, the Puppet agent is installed as a service/daemon, but not yet running. The next step would be configuring the manifests on the Puppet Master; then you can start the “Configure Windows Agent with Powershell” or “Configure Linux Agent with SSH” workflow. Now the Puppet agents are running and communicating with the Puppet Master.

Outgoing REST notifications in vRealize Operations: bugs and issues


It is well known that administrators can configure outbound alert instances within vRealize Operations (see Fig 1).
The REST notification plugin is especially interesting when there is another ticket system that should receive vRealize Operations alerts. In that case, you have to write your own web service for receiving alerts from vRealize Operations. The documentation already provides sample payloads for XML and JSON:

 

{
    "startDate": 1369757346267,
    "criticality": "ALERT_CRITICALITY_LEVEL_WARNING",
    "resourceId": "sample-object-uuid",
    "alertId": "sample-alert-uuid",
    "status": "ACTIVE",
    "subType": "ALERT_SUBTYPE_AVAILABILITY_PROBLEM",
    "cancelDate": 1369757346267,
    "resourceKind": "sample-object-type",
    "adapterKind": "sample-adapter-type",
    "type": "ALERT_TYPE_APPLICATION_PROBLEM",
    "resourceName": "sample-object-name",
    "updateDate": 1369757346267,
    "info": "sample-info"
}

If you select application/xml, the body of the POST or PUT calls that are sent is supposed to have the corresponding XML format.

Within a project I had to implement such a web service. However, it turned out that the documentation is wrong; there are additional elements that are transmitted as well:

  • Risk
  • Health
  • Efficiency

Furthermore, there is also a bug in vRealize Operations with the XML payload: while the Content-Type header is correctly set to application/xml, the actual body is nevertheless sent in JSON format. To clarify things, here is the data going over the wire when clicking the Test button:

POST XML

Body{"cancelDate":1425631300408,"updateDate":1425631300408,"resourceId":"test","adapterKind":"test","Health":0,"criticality":"ALERT_CRITICALITY_LEVEL_INFO","Risk":0,"resourceName":"test","type":"ALERT_TYPE_TIER","resourceKind":"test","Efficiency":0,"subType":"ALERT_SUBTYPE_SMART_KPI_BREACH","alertId":"test","startDate":1425631300408,"info":"test","status":"ACTIVE"}

——-

Header{content-length=[363], host=[10.10.1.71:443], connection=[Keep-Alive], user-agent=[Apache-HttpClient/4.1.3 (java 1.5)], Content-Type=[application/xml;charset=UTF-8]}

 

PUT XML

Body{"cancelDate":1425631300408,"updateDate":1425631300408,"resourceId":"test","adapterKind":"test","Health":0,"criticality":"ALERT_CRITICALITY_LEVEL_INFO","Risk":0,"resourceName":"test","type":"ALERT_TYPE_TIER","resourceKind":"test","Efficiency":0,"subType":"ALERT_SUBTYPE_SMART_KPI_BREACH","alertId":"test","startDate":1425631300408,"info":"test","status":"ACTIVE"}

——

Header{content-length=[363], host=[10.10.1.71:443], connection=[Keep-Alive], user-agent=[Apache-HttpClient/4.1.3 (java 1.5)], Content-Type=[application/xml;charset=UTF-8]}

 

 

POST JSON

Body{"cancelDate":1425631319242,"updateDate":1425631319242,"resourceId":"test","adapterKind":"test","Health":0,"criticality":"ALERT_CRITICALITY_LEVEL_INFO","Risk":0,"resourceName":"test","type":"ALERT_TYPE_TIER","resourceKind":"test","Efficiency":0,"subType":"ALERT_SUBTYPE_SMART_KPI_BREACH","alertId":"test","startDate":1425631319242,"info":"test","status":"ACTIVE"}

——

JSON Header{content-length=[363], host=[10.10.1.71:443], connection=[Keep-Alive], user-agent=[Apache-HttpClient/4.1.3 (java 1.5)], Content-Type=[application/json;charset=UTF-8]}

 

 

PUT JSON

Body{"cancelDate":1425631319242,"updateDate":1425631319242,"resourceId":"test","adapterKind":"test","Health":0,"criticality":"ALERT_CRITICALITY_LEVEL_INFO","Risk":0,"resourceName":"test","type":"ALERT_TYPE_TIER","resourceKind":"test","Efficiency":0,"subType":"ALERT_SUBTYPE_SMART_KPI_BREACH","alertId":"test","startDate":1425631319242,"info":"test","status":"ACTIVE"}

——

JSON Header{content-length=[363], host=[10.10.1.71:443], connection=[Keep-Alive], user-agent=[Apache-HttpClient/4.1.3 (java 1.5)], Content-Type=[application/json;charset=UTF-8]}
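A receiving web service therefore has to be defensive: parse the body as JSON regardless of the Content-Type header, and accept the undocumented Risk, Health, and Efficiency fields. A minimal sketch of such a parser (the field names come from the captured payloads above; the function itself is my own illustration):

```javascript
// Parse an incoming vRealize Operations alert notification body.
// vROps sends JSON even when the Content-Type header says application/xml,
// so the header is deliberately ignored and the body is parsed as JSON.
function parseVropsAlert(contentType, body) {
    var alert = JSON.parse(body);
    return {
        alertId:     alert.alertId,
        status:      alert.status,
        criticality: alert.criticality,
        // Fields transmitted but missing from the documentation:
        risk:       alert.Risk,
        health:     alert.Health,
        efficiency: alert.Efficiency
    };
}
```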

Read our latest iX review on MDM here free of charge

For our current test in the area of mobile device management, we took a look at the Cortado Corporate Server 7.2 solution from Cortado Mobile Solutions GmbH. The result is a comprehensive review for iX Magazin 02/2015. You can read the full article HERE.

Why did we choose this product? Two factors spoke in its favor:

1. Complete on-premises operation, making it suitable for particularly high data protection requirements (e.g. government agencies, hospitals, practice networks, public institutions)

2. Simplicity of installation and integration into an existing Active Directory (low costs)

These two points are also the ones customers repeatedly voice as requirements in our conversations. For decision-makers and administrators, however, it is not always easy to keep track of the available products. Despite ongoing consolidation, the EMM market is still characterized by a large number of products, and it is fast-moving. The latter stems from the fact that EMM vendors have to keep pace with frequent releases of mobile operating systems, and the industry is constantly changing due to many acquisitions.