Creating Palo Alto objects from Infoblox

 

Goal

As probably any administrator has experienced, administration over multiple systems can be problematic. You have to keep your naming convention consistent across all systems, and it is not uncommon to get the feeling you are executing the same change multiple times on different systems. Sounds boring already, right?

The answer every vendor throws at you: “But we have an API!” Sure, we need the APIs, but who is going to call them? Are you going to run a script from your local machine? Deploy a server for the sole purpose of running Python and curl? And who is going to maintain those scripts? Of course you could buy an orchestrator-like tool, but the step between manual labor and full-blown automation is huge. It’s more of a leap… over the Grand Canyon.

It would be great to be able to tie together different tools in your infrastructure, like your IPAM and firewall, and have them exchange information. Most of the time those two will be from different vendors, and it will be like me trying to speak French: frustrating and useless.

Wouldn’t it be great if you could somehow tell vendor X what language to speak to vendor Y? Infoblox did a wonderful job on exactly that, and that’s what this post is all about.

To summarize, the goal here is to use Infoblox IPAM to administer which host has what IP address and then create an object on the Palo Alto firewall with that information. We like a clean and tidy environment, so when we delete an object from Infoblox IPAM we will also delete it from the firewall.

Introductions

For the people who don’t know them, let’s first introduce the key players of this post.

Palo Alto Networks is a next-gen firewall vendor. I won’t go too deep into the “why-they-are-so-great-and-fantastic” part, but their zone-based, application-based firewalling works great and the interface is fantastic. For this post you must know that their policies are based on objects, which can be IP-based host or network addresses or DNS objects.

Infoblox is your DDI king, which stands for DHCP, DNS, IPAM (IP Address Management). An abbreviation of abbreviations, nice! It’s your all-in-one Swiss Army knife when it comes to any of the DDI components. We will focus on the IPAM part.

Theory

The concept is actually so simple I’m not even going to make a nice Visio drawing for it, and I really like making those.
In Infoblox you create an API endpoint for the Palo Alto; this uses a session template to connect and authenticate.

API endpoint information
Session template selection

The templates are JSON files (available at the bottom) which you upload to Infoblox on the templates tab.

Templates overview

The next step is to use notifications. These define the trigger conditions: when Infoblox should contact the firewall and send it object information.

Notification conditions
Notification template usage

What is configured above is: “Hey Infoblox, if a Host object changes in the default network view, send information to the Palo Alto using the Palo Alto actions template.”

In theory this is easy! But there is a catch: the templates are not yet available, and who is going to provide them? Should it be Palo Alto or Infoblox? As I imagined how the conversations with both vendors would go, I figured it would be easier to create the templates myself.

Setup

How hard could it be, I thought: you just tell Infoblox how to talk to the Palo Alto, put in a variable here and there, and done, right? How wrong could I have been.

There is documentation and there are some examples on the Infoblox user forum, but they are more of a push in the right direction.

After some tries I figured I needed some more insight into what was going on between Infoblox and the Palo Alto so I used the setup below.

  • Infoblox trial VM, NIOS 8.2.1 with the DNS, DHCP and Security Ecosystem licenses activated.
  • Palo Alto trial VM, PANOS 5.0.6 (sorry, I don’t have a newer version available currently).
  • An Ubuntu box with Squid and tcpdump.

I’m using Squid as a reverse proxy for the Palo Alto so I can listen in on the traffic between Infoblox and the Palo Alto. That’s also the reason I’m using HTTP; in a production environment you would use HTTPS.

Templates

As mentioned, the templates are divided into two categories: session and event templates. The session templates are used for login and logout functionality; the event templates contain the actions to be taken when a notification rule is hit. All templates are in JSON format.

A template can be divided into two parts: overall settings and steps.
In the overall settings you configure high-level settings like the maximum number of requests in a session or the session duration.
The steps are pieces of JSON which describe an action to take; they are like little functions. Unless you tell it otherwise, Infoblox will run your steps from top to bottom. Luckily you can also point to another step after one has executed, or stop execution of the template completely.

Session Template

The session template is pretty straightforward and doesn’t have to be complicated. For the Palo Alto it was (lucky for me) pretty easy to create. Since Palo Alto doesn’t provide a logout function for the API (please correct me if I’m wrong), I only had to create a login template and store the API key the Palo Alto provides.

I’ve cut the session templates into multiple pieces: there is a general session template in which you can refer to a login and a logout template. For the Palo Alto I’ve only created a login template.

Below are the templates. Because JSON doesn’t allow comment lines I’ll point out the interesting lines under the code boxes.

PA_session.json

{
	"name": "Palo Alto Session",
	"comment": "v 0.2 www.vknit.nl",
	"version": "3.0",
	"type": "REST_ENDPOINT",

	"vendor_identifier": "Palo Alto",
	"path": "/api",
	"login_template": "PA_login",
	"override_path": true
}

 

This is pretty straightforward: put in a name, some comment, and which vendor the template is for. The type key defines it as a REST API endpoint. Since the API path of Palo Alto is always “/api”, I’ve enabled the override_path key and set the path key so this cannot be broken by user input.
The login_template key points to the PA_login template below; the value isn’t linked to the filename but to the template name defined in the name key. To keep things easy you want to keep the filename and template name identical.

PA_login.json

{
	"name": "PA_login",
	"comment": "v 0.2 www.vknit.nl",
	"vendor_identifier": "Palo Alto",
	"version": "3.0",
	"content_type": "text/xml",
	"quoting": "XMLA",
	"type": "REST_EVENT",
	"event_type": ["SESSION"],
	"steps": 
	[
		{
			"name": "login: remove basic auth headers",
			"body": "${XC:ASSIGN:{H:Authorization}:{S:}}",
			"operation": "NOP"
		},      
		{
			"name": "login: request",
			"parse": "XMLA",
			"operation": "GET",
			"no_connection_debug": false,
			"transport": {"path": "?type=keygen&user=${UT::USERNAME}&password=${UT::PASSWORD}"}
		},
		{
			"name": "login: errorcheck",
			"operation": "CONDITION",
			"condition": 
			{
				"statements": [
					{
					"op": "!=",
					"right": "${P:A:PARSE{response}{{status}}}",
					"left": "success"
					}
				],
				"condition_type": "AND",
				"else_eval": "${XC:COPY:{S:SESKEY}:{P:PARSE{response}{result}{key}}}",
				"error": true
			}
		}
	]
}

 

Most of the keys at the top are the same as in the PA_session template, so I’ll walk you through the keys that matter. The quoting key defines how to handle quoting; Infoblox recommends XMLA, which is like a tweaked XML.
More info on XMLA can be found in the Infoblox documentation. The way a list is parsed does make more sense using XMLA: a list with only one value is converted to a plain value when using XML, while XMLA will still give you a list with one value, which is nice if your code expects a list.

The event_type key identifies this template as part of the session templates.

Finally, we’ve come to our first steps! Every step is defined by the keys between the curly brackets, and every step must have a name and an operation key; these define the identity of the step and describe what kind of action Infoblox has to take.
This also brings us to the difficult part of the outbound API: namespaces.
The following namespaces are available:

  • C: http cookies. It supports only the DEL operation (primarily for logout purposes), but it can be used as a substitution origin.
  • Read-Only E: Event data.
  • H: http headers. Note that the assigned variables are sent in the next HTTP request and it survives the template execution.
  • Read-Only I: Template instance variables. It is set in the GUI during the creation of the filter and it also includes the endpoint variables that are set in the GUI when creating an endpoint. Note that the instance variables can override endpoint variables, if needed.
  • L: Local template variables. This name space is empty at template startup and does not survive the template invocation.
  • Read-Only P: Previous endpoint response values (if parsing is enabled for the response.)
  • Read-Only R: Previous endpoint request http-specific return values. This includes RC, the http status code of the previous request (Example: 200), BODY, and the body of the response.
  • Read-Only RH: Previous endpoint request that returned http headers.
  • S: Endpoint session state variables. These variables survive the template invocation (similar to the L: name space, but not cleared at the end of the template execution.)
  • Read-Only UT: Read-only utility variables, such as the USERNAME and PASSWORD configured for the endpoint.
  • Read-Only XC: Execute a command on the variable. This produces no output.

On the namespaces you can use different variable formats, these are the ones listed on the Infoblox outbound API page:

  • J: The output is in the form of JSON formatted variable. It supports deserializing lists as well as dictionaries. Note that strings will have double quotes prepended/appended when serialized with J.
  • j: The output is in the form of JSON formatted variable. It supports deserializing lists as well as dictionaries without the leading or trailing double quotes, if the variable is a string.
  • X: The output is in the form of an XML formatted variable. It supports only deserializing lists, such as an <item>..</item> sequence.
  • U: The output is in the form of url encoded variable. It supports deserializing lists and the output will be a string (comma separated value.)
  • A: The output will be a variable, which is as-is.
  • S: The output is in the form of a string. This is the default for JSON if you do not specify any output format. By default, even numbers will be serialized as JSON strings, meaning the output in a JSON quoted template for a numerical value of 1234 will be “1234”.
  • N: The output is in the form of numbers. For example, if the variable is a boolean, the output will be 0, 1, etc.
  • B: The output will be a boolean, that is true/false.
  • L: The length of the variable. This is supported only for lists (the length of the list) and dictionaries (the number of keys).
  • T: The type of the variable. This can be one of the following characters: ‘S’ for strings, ‘L’ for lists, ‘D’ for dictionaries, ‘B’ for booleans, ‘N’ for numbers, and ‘O’ for otherwise.

In the XC namespace you can use the following functions:

  • ASSIGN: Assigns the value to the specified variable. Note that the value assigned is in the format I/S/B:value for integer, string, and boolean values. Example: ASSIGN:variable:value.
  • DEBUG: Outputs the specified variable to the debug file (if the log level is not set to DEBUG, this will be ignored), if only the name space is used, the whole name space will be printed.
  • INC: Increments the variable value. If the value is not a number, NIOS displays an error.
  • DEC: Decrements the variable value. If the value is not a number, NIOS displays an error.
  • COPY: Copies one variable into another. Example: COPY:destination:source.
  • DEL: Removes the variable. This supports only the C:, H:, L:, and S: name spaces.
  • FORMAT: Formats the value according to what is specified after the second ‘:’. Currently, NIOS supports the following formats:
    U: Converts to uppercase value.
    L: Converts to lowercase value.
  • DATE_EPOCH: Assuming that the value is a date expressed in UTC ISO 8601 date format. For example, 2016-03-13T04:50:31Z will be converted to EPOCH seconds.
  • DATE_ISO8601: Assuming that the value contains EPOCH seconds. The value is converted to a date string expressed in UTC ISO 8601 date format. For example, 1467152565 will be converted to 2016-06-28T22:22:45Z. If the variable contains milliseconds, they will be preserved. For example 1467152565.57 will be converted to 2016-06-28T22:22:45.570Z.
  • DATE_STRFTIME: Assuming that the variable contains EPOCH seconds. The value is converted to a date string with the specified format which is passed as the second parameter to the function.
  • PUNYCODE_TO_UTF-8: Assuming that the variable contains a punycode encoded domain name. The domain name representation will be converted to UTF-8 characters. Note that there might be a failure if the domain name has non-UTF-8 characters in its wire format.
  • TRUNCATE: Assumes the variable is a string; it will be truncated as specified. The format is a number (positive or negative) followed by the letter ‘f’ or ‘t’. The number is the starting character of the string (positive counted from the beginning, negative from the end) and f/t defines whether the characters kept are from (after) or up to that point. For example, if a string is 12345, then 1f will produce 2345, 1t will produce 1, -1f will produce 5 and -1t will produce 1234.
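To make the TRUNCATE semantics concrete, here is a small Python re-implementation of the behaviour described above (my own illustrative sketch, not Infoblox code):

```python
def truncate(value: str, spec: str) -> str:
    """Mimic the XC TRUNCATE function: spec is a number followed by
    'f' (keep characters from/after that point) or 't' (keep up to it),
    e.g. '1f', '-1t'. Negative numbers count from the end."""
    pos = int(spec[:-1])
    mode = spec[-1]
    if mode == "f":
        return value[pos:]
    if mode == "t":
        return value[:pos]
    raise ValueError(f"unknown truncate mode: {mode!r}")
```

Plain Python slicing turns out to express the f/t rule exactly.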

In the P namespace there is the PARSE function which parses the result from an outbound API call as specified in the parse key.

Information overload? Just keep the lists above as a reference when building your own steps. Back to our own steps.

{
  "name": "login: remove basic auth headers",
  "body": "${XC:ASSIGN:{H:Authorization}:{S:}}",
  "operation": "NOP"
}

The name is an identifier, but in this case it is also used as a description field.
The operation key sets the action, in this case NOP, which means no further action is required except executing the code in the body key.
Here the XC (eXeCution) namespace is used to assign an empty value to the Authorization variable in the H (Header) namespace. This results in the authorization headers being removed.

{
  "name": "login: request",
  "parse": "XMLA",
  "operation": "GET",
  "no_connection_debug": false,
  "transport": {"path": "?type=keygen&user=${UT::USERNAME}&password=${UT::PASSWORD}"}
}

As you can guess from the name, this step sends the login request. The parse key defines that the result should be parsed as XMLA. The operation key is set to GET, which tells Infoblox a GET request should be made.
The transport key holds an object with different settings for the request; I append the GET variables to the URI and path fields which are configured in the Infoblox GUI. Because I overrule the user-supplied path value in the PA_session template, the full URL will be the URI field + the path key from the PA_session template + the path key in the transport key.
The Palo Alto username and password are also entered by the user when adding an API endpoint in the Infoblox interface. These can be retrieved from the UT namespace.
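Put together, the URL the step ends up requesting can be sketched in Python like this (the address and credentials in the comment are made up; I use urlencode for safety, while the template itself appends the values verbatim):

```python
from urllib.parse import urlencode

def keygen_url(base_uri: str, username: str, password: str) -> str:
    # URI field (from the GUI) + the /api path forced by PA_session +
    # the transport path from the "login: request" step.
    query = urlencode({"type": "keygen", "user": username, "password": password})
    return f"{base_uri}/api/?{query}"

# keygen_url("http://192.0.2.1", "apiuser", "secret") builds the same
# keygen request the step sends.
```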

{
  "name": "login: errorcheck",
  "operation": "CONDITION",
  "condition": 
  {
    "statements": [
      {
      "op": "!=",
      "right": "${P:A:PARSE{response}{{status}}}",
      "left": "success"
      }
    ],
    "condition_type": "AND",
    "else_eval": "${XC:COPY:{S:SESKEY}:{P:PARSE{response}{result}{key}}}",
    "error": true
  }
}

Now it gets a bit more exciting: we get to parse the result from the request step. You’ll notice the operation key is set to CONDITION; simply put, this means there will be a sort of if-statement to determine what to do. No surprise that the condition key follows, which defines what we are going to compare. Every comparison is made in a statement key; you can have multiple statements per condition and combine them in an AND/OR way.

So to determine if our login was a success let’s parse the xml returned from the Palo Alto in the P namespace.
Palo Alto returns the following xml:

<response status="success">
  <result>
    <key></key>
  </result>
</response>

To retrieve the value of the status attribute we use the following code in the template: ${P:A:PARSE{response}{{status}}}
The $ sign tells Infoblox a variable will follow. Then we identify the namespace P, followed by an A which means the retrieved value could be anything. Using PARSE we enter the XML; with single curly brackets we navigate through the XML schema, and with double curly brackets we retrieve the value of status.

Back to the condition: we check if the value of status is not equal to “success”. The reason != is used is because of the follow-up steps Infoblox expects you to take.
If we were to check whether status equals “success”, Infoblox wouldn’t allow us to execute a statement if true and throw an error if false.
So we turn it the other way around: the statement is true if the login fails, so we throw an error. If the condition is false the login was successful, and we put the API key in a variable called SESKEY in the S (Session) namespace.

So now we are logged in, let’s synchronize some objects.

Action template

The action template is one JSON file. It uses the same format as the login templates: general settings at the top, followed by the different steps where all the action happens.

Let’s start with the top part:

PA_actions.json (top)

{
	"name": "Palo Alto actions",
	"comment": "v 0.2 www.vknit.nl",
	"version": "3.0",
	"type": "REST_EVENT",
	"event_type": [
		"NETWORK_IPV4",
		"RANGE_IPV4",
		"FIXED_ADDRESS_IPV4",
		"HOST_ADDRESS_IPV4",
		"NETWORK_IPV6",
		"RANGE_IPV6",
		"FIXED_ADDRESS_IPV6",
		"HOST_ADDRESS_IPV6"
	],
	"action_type": "Palo Alto actions",
	"content_type": "text/xml",
	"vendor_identifier": "Palo Alto",
	"quoting": "XMLA",

 

Most of the fields are already explained in the login template, so I’ll just explain the new one. The event_type key holds a list of all the event types the template can act on; currently the template only implements actions for IPv4 host addresses.
The complete list of possible values:

  • RPZ
  • LEASE
  • TUNNEL
  • NETWORK_IPV4
  • NETWORK_IPV6
  • RANGE_IPV4
  • RANGE_IPV6
  • FIXED_ADDRESS_IPV4
  • FIXED_ADDRESS_IPV6
  • HOST_ADDRESS_IPV4
  • HOST_ADDRESS_IPV6
  • SESSION

With the exception of SESSION, which is used for login templates, these can all be put together in the event_type key to inform Infoblox about the available actions of the template. If you are familiar with Infoblox terms the list above should be pretty self explanatory.

Onward to the steps!

Remember the way the steps are executed? Top-down, step by step, and unless you tell it otherwise all steps will be executed. Some steps in the action template create objects and others delete an object, so it’s important to have a logical order in your steps.

I’ve divided the steps in the following groups:

  1. Selection, determine which action to perform.
  2. Action, perform the action.
  3. Commit, perform a commit on the Palo Alto to make the change effective.
  4. Quit, I don’t think this needs any explaining.

When an event triggers the template the following will happen:

  1. From the E (Event) namespace I check what has happened and call the corresponding step. If nothing matches, execution is stopped so no action steps are run unintentionally.
  2. The action step runs, and if the result is good the commit step is called. If the result is not so good, we start checking what went wrong.
  3. The commit runs and, if successful, the template quits.

During the creation of the template I found debugging and tracing errors quite problematic, as almost no information ends up in the Infoblox logs. To make it easier for yourself, start out with a template that communicates over HTTP and throw in some steps with a POST or GET action containing useful information for your troubleshooting. You can then intercept the call with tcpdump or Wireshark.

Let’s look at the different group of steps.

PA_actions.json selection steps

	"steps":
	[
		{
			"name": "Start",
			"comment": "Starting step to use when jumping back to the start.",
			"operation": "SLEEP",
			"timeout": "0"
		},
		
		{
			"name": "Host_add_check",
			"comment": "Check for action.",
			"operation": "CONDITION",
			"condition": {
				"condition_type": "AND",
				"statements": [
					{
						"left": "${E::event_type}", 
						"op": "==", 
						"right": "HOST_ADDRESS_IPV4"
					},
					{
						"left": "${E::operation_type}", 
						"op": "==", 
						"right": "INSERT"
					}
				],
				"next": "Host_add"
			}
		},
		{
			"name": "Host_del_check",
			"comment": "Check for action.",
			"operation": "CONDITION",
			"condition": {
				"condition_type": "AND",
				"statements": [
					{
						"left": "${E::event_type}", 
						"op": "==", 
						"right": "HOST_ADDRESS_IPV4"
					},
					{
						"left": "${E::operation_type}", 
						"op": "==", 
						"right": "DELETE"
					}
				],
				"next": "Host_del"
			}
		},
		{
			"name": "Host_mod_check",
			"comment": "Check for action.",
			"operation": "CONDITION",
			"condition": {
				"condition_type": "AND",
				"statements": [
					{
						"left": "${E::event_type}", 
						"op": "==", 
						"right": "HOST_ADDRESS_IPV4"
					},
					{
						"left": "${E::operation_type}", 
						"op": "==", 
						"right": "MODIFY"
					}
				],
				"stop": true
			}
		},	
		
		{
			"name": "Unknown",
			"comment": "DEBUG, not sure what is happening, send out info.",
			"operation": "CONDITION",
			"condition": {
				"condition_type": "AND",
				"statements": [
					{"left": "1", "op": "==", "right": "1"}
				],
				"next": "unknown"
			}
		},
		
		{
			"name": "Exit",
			"comment": "Nothing matched, stop execution.",
			"operation": "CONDITION",
			"condition": {
				"condition_type": "AND",
				"statements": [
					{"left": "1", "op": "==", "right": "1"}
				],
				"stop": true
			}
		},

The first step is somewhat of a placeholder: when I need to jump back to the start of the template to restart execution, I can point to this step.

The most important variables used are E::event_type and E::operation_type. These hold the type of event, meaning in which part of Infoblox something happened, and the operation type, which holds what has happened. If both variables match, I jump to the corresponding step.

Using the condition steps I can identify the following events:

  • Creating an ipv4 host
  • Deleting an ipv4 host
  • Modifying an ipv4 host

The Host_mod_check step is a bit of a dummy: as you can see, the step doesn’t have a next key pointing to a next step but has “stop”: true, which stops execution of the template. This is because editing a host in Infoblox triggers an add/delete action (depending on what you modified), followed by a modification message indicating the previous actions were linked together.

The Unknown step is used for debugging; if you plan to use this in production you should remove it. If none of the above steps successfully identify an action, I use the Unknown step to jump to another step and send myself some more information. Under normal operation, when the Unknown step is removed, the Exit step will execute and the template stops.
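The whole selection chain boils down to a lookup on those two event variables. Sketched in Python (step names are taken from the template; None mirrors the "stop": true of Host_mod_check):

```python
# Map (event_type, operation_type) pairs to the step the selection
# conditions jump to; None means "stop execution".
DISPATCH = {
    ("HOST_ADDRESS_IPV4", "INSERT"): "Host_add",
    ("HOST_ADDRESS_IPV4", "DELETE"): "Host_del",
    ("HOST_ADDRESS_IPV4", "MODIFY"): None,
}

def select_step(event_type: str, operation_type: str):
    """Return the next step name, None to stop, or 'unknown' as the
    debug fallback, mirroring the condition chain above."""
    key = (event_type, operation_type)
    if key in DISPATCH:
        return DISPATCH[key]
    return "unknown"
```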

PA_actions.json action steps

		{
			"name": "Host_add",
			"comment": "Add an object.",
			"parse": "XMLA",
			"operation": "GET",
			"no_connection_debug": false,
			"transport": {"path": "?type=config&action=set&xpath=/config/shared/address/entry[@name='${E:A:values{host}}_${E:A:values{ipv4addr}}']&element=<ip-netmask>${E:A:values{ipv4addr}}/32</ip-netmask>&key=${S::SESKEY}"}
		},
		{
			"name": "Host_add errorcheck",
			"comment": "Check for errors.",
			"operation": "CONDITION",
			"condition": {
				"statements": [
					{"left": "success", "op": "==", "right": "${P:A:PARSE{response}{{status}}}"}
				],
				"condition_type": "AND",
				"else_next": "commit_pending_check",
				"next": "commit"
			}
		},
		
		{
			"name": "Host_del",
			"comment": "Delete an object.",
			"parse": "XMLA",
			"operation": "GET",
			"no_connection_debug": false,
			"transport": {"path": "?type=config&action=delete&xpath=/config/shared/address/entry[@name='${E:A:values{host}}_${E:A:values{ipv4addr}}']&element=<ip-netmask>${E:A:values{ipv4addr}}/32</ip-netmask>&key=${S::SESKEY}"}
		},
		{
			"name": "Host_del errorcheck",
			"comment": "Check for errors.",
			"operation": "CONDITION",
			"condition": {
				"statements": [
					{"left": "success", "op": "==", "right": "${P:A:PARSE{response}{{status}}}"}
				],
				"condition_type": "AND",
				"else_next": "commit_pending_check",
				"next": "commit"
			}
		},

		
		{
			"name": "commit",
			"comment": "Start a commit.",
			"parse": "XMLA",
			"operation": "GET",
			"no_connection_debug": false,
			"transport": {"path": "?type=commit&cmd=<commit><partial><shared-object></shared-object></partial></commit>&key=${S::SESKEY}"}
		},
		{
			"name": "commit errorcheck",
			"comment": "Check for errors.",
			"operation": "CONDITION",
			"condition": {
				"statements": [
					{"left": "${P:A:PARSE{response}{{status}}}", "op": "==", "right": "success"}
				],
				"condition_type": "AND",
				"else_next": "commit_pending_check",
				"next": "Exit"
			}
		},

Above are the steps to create or delete an object on the Palo Alto. Every step is followed by an error check to make sure the API call was properly handled on the Palo Alto.

The Host_add and Host_del steps are not very complicated. Using the transport key, a string containing the GET variables is appended to the path.
In the Host_add step I build the name of the object using the following code:

@name='${E:A:values{host}}_${E:A:values{ipv4addr}}'

This results in an object name consisting of the host name and IPv4 address from Infoblox. Because a host object can have multiple IPv4 addresses, you get a separate object for every IPv4 address.
Alternatively it would be possible to use a DNS object in the Palo Alto, but that would require the host object to be resolvable, and determining when the object could be deleted from the Palo Alto would take more effort.

After an API call to the Palo Alto has been made, the following step checks for errors.

A good result would look something like the message below:

<response status="success" code="19">
	<result>
		<msg>
			<line>Commit job enqueued with jobid 14</line>
		</msg>
		<job>14</job>
	</result>
</response>

The status value in the response object is matched in the error checking step. In this case the value is “success” and execution continues normally to the commit step.

The commit step does an API call starting a partial commit on the Palo Alto. A full commit isn’t necessary as there are only object changes. If the commit returns success, the next step is Exit; I don’t think the function of that step has to be explained.

But if things don’t run so smoothly…:

<response status="error" code="13">
	<msg>
		<line>A commit is pending. Please try again later.</line>
	</msg>
</response>

Since the condition in the error checking step will now return false, because the status value doesn’t match “success”, the else_next key will be followed instead of the next key.

This brings us to the bottom part of the steps, which contains the error checking and exit steps.

		{
			"name": "commit_pending_check",
			"comment": "Check if a commit is pending, if so, wait 15 seconds. If not, it is an unknown error.",
			"operation": "CONDITION",
			"condition": {
				"statements": [
					{
						"left": "error", 
						"op": "==", 
						"right": "${P:A:PARSE{response}{{status}}}"
					},
					{
						"left": "13", 
						"op": "==", 
						"right": "${P:A:PARSE{response}{{code}}}"
					}
				],
				"condition_type": "AND",
				"next": "sleep15",
				"else_error": true
			}
		},

		{
			"name": "unknown",
			"comment": "DEBUG, Don't know what is happening, send information.",
			"parse": "XMLA",
			"operation": "GET",
			"no_connection_debug": false,
			"transport": {"path": "?${E::event_type}&${E::operation_type}"}
		},
		
		{
			"name": "FinExit",
			"comment": "Stop execution of the template.",
			"operation": "CONDITION",
			"condition": {
				"condition_type": "AND",
				"statements": [
					{"left": "1", "op": "==", "right": "1"}
				],
				"stop": true
			}
		},
		
		{
			"name": "sleep15",
			"comment": "Wait 15 seconds and restart execution of the template.",
			"operation": "SLEEP",
			"timeout": "15"
		},
		{
			"name": "restart",
			"comment": "Waited 15 seconds, restart the template.",
			"operation": "CONDITION",
			"condition": {
				"condition_type": "AND",
				"statements": [
					{"left": "1", "op": "==", "right": "1"}
				],
				"next": "Start"
			}
		}		
	]
}

The first (and currently only) error checking step is commit_pending_check.
While a commit is pending on the Palo Alto (at least in PANOS 5.0.6) you cannot make any changes such as creating objects; I believe they changed this behaviour in later versions, which I will check as soon as I get my hands on a newer version.
The error code for this specific error is “13”. The solution is simple, as the error will fix itself once the commit is finished: all we have to do is wait. So if the condition is true, we jump to the sleep15 step, which pauses execution for 15 seconds. After that, the next step forces a restart of the template and jumps back to the very first step.
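The retry behaviour is essentially this loop, sketched in Python (send_commit is a stand-in for the commit API call returning a (status, code) pair; the max_tries cap is my own addition, the template itself just keeps retrying):

```python
import time

def commit_with_retry(send_commit, wait_seconds: float = 15, max_tries: int = 8):
    """Sketch of the commit / commit_pending_check / sleep15 / restart
    loop: on error code "13" (commit pending) wait and retry, any other
    error is fatal, success ends the loop."""
    for _ in range(max_tries):
        status, code = send_commit()
        if status == "success":
            return True
        if status == "error" and code == "13":   # commit pending
            time.sleep(wait_seconds)
            continue
        raise RuntimeError(f"commit failed: status={status} code={code}")
    raise RuntimeError("gave up waiting for pending commit")
```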

Under normal circumstances no errors are raised, and after the commit step has finished the Exit step will execute and execution will be stopped.

Summary

Implementation

Use the following steps to upload the templates.

  1. In Infoblox go to the System tab, Ecosystem sub-tab, Templates sub-sub-tab.

    Tabs
  2. Click on the + button to open the Add Template screen.
  3. Select the template file and click Upload.
  4. Click the Add button to add the template to the template list.
  5. Repeat steps 2-4 for all template files in this order: PA_login.json, PA_session.json, PA_actions.json.
Templates overview

You can now add the Palo Alto in the Outbound Endpoint tab. Click on the + button and select “Add REST API Endpoint“.

Add API endpoint step 1

Fill in the following fields:

  • URI : The URI of the endpoint; most of the time this will be http(s)://<mgmt-ip>.
  • Name : The name of your Palo Alto
  • Vendor Type : Palo Alto
  • Auth Username/Password : This will be used to authenticate on the Palo Alto.
  • WAPI Integration Username/Password : This will be used to authenticate on Infoblox if necessary.
Add API endpoint step 2

In step 2, change the Timeout to 2 minutes; when you click the Select Template button it should select the Palo Alto Session template automatically.
Hit Save & Close and your endpoint is configured!

The next step is to add a notification which will determine when an event is triggered. To do this, go to the Notifications tab and click the + button.

Add notification step 1

Enter a name for the notification and select the endpoint you just configured.

Add notification step 2

The current template only supports IPv4 host events, so select “Object Change Host Address IPv4” from the event list.
You can enter different rules which must be matched before an event is triggered. In my setup I only have the default network view, so the configured rule effectively generates an event for every IPv4 host change.

Add notification step 3

In the last step, select the Palo Alto actions template and hit Save & Close.

If everything is set up right, objects on your Palo Alto should be created when you create a new IPv4 host in Infoblox. If this is not happening, check your Palo Alto system log and the Infoblox syslog messages.

Limitations

There are some limitations I’m aware of (and probably also some I don’t know about) in version 0.2.

When deleting an object from Infoblox, be sure you have already removed the object from any policies or groups on the Palo Alto, or the delete will fail.

Only IPv4 host objects are supported; I plan to add more object types.

