This is the user manual for the VCAedge video analytics plug-in.
This manual will describe how to license, enable and configure the features of our video analytics to detect events of interest and trigger actions to react to those events.
The VCAedge plug-in is a set of analytical tools that can be loaded onto supported cameras. It provides the means to perform advanced analytics, reduce false alerts and customize when events occur. To get started, you will need to add a license, after which you can enable the VCAedge engine and start using the features.
Before continuing, make sure you are familiar with the camera's interface and have the username and password available.
By default, the VCAedge plug-in is disabled; activate it before configuring any features.
The available tracker engine settings depend on the hardware platform and the active license.
Note: The menu system will reload after each apply to reflect the available features, which depend on the license that has been applied.
Rules are used to react to events within a scene and trigger actions. To manage the rules, navigate to the rules feature from the VCAedge menu.
The rules page displays a live view from the camera and allows you to add, modify or delete rules.
Use this option to show analytic data on the camera view; select Start to show the data and Stop to hide it.
Note: The burnt-in annotation feature needs to be enabled for this option to function. This does not affect the processing of analytics, but the annotation requires more resource from the camera and is not on by default.
The table shows the rules that have been defined for the camera: Add adds additional rules, Modify changes the settings of a selected rule, and Delete removes the selected rule.
Note: Rules cannot be deleted if they are linked to an action or counter. Remove these links before attempting to delete.
The type of rules available include:
The presence polygon rule triggers an event when an object is first detected in a particular zone.
Note: The presence polygon rule will trigger in the same circumstances as the Enter and Appear rules; the choice of which rule is most appropriate will depend on the scenario.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The presence line rule triggers an event when an object is first detected crossing a particular line.
Note: The presence line rule will trigger in the same circumstances as the direction and counting line rules; the choice of which rule is most appropriate will depend on the scenario.
The rule will create a line and overlay it on the live view; the line can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The enter rule triggers an event when an object crosses from outside a zone to inside a zone.
Note: The enter rule detects already-tracked objects crossing the zone border from outside to inside.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The exit rule triggers an event when an object crosses from inside a zone to outside a zone.
Note: The exit rule detects already-tracked objects crossing the zone border from inside to outside.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The appear rule triggers an event when an object starts to be tracked from within a zone.
Note: The appear rule detects objects that start being tracked from within a zone.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The disappear rule triggers an event when an object stops being tracked within a zone.
Note: The disappear rule detects objects that stop being tracked from within a zone.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The stopped rule triggers an event when an object has stopped in a particular zone for a pre-defined period of time.
Note: The stopped rule does not detect abandoned objects. It only detects objects which have moved at some point and then become stationary.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The dwell rule triggers an event when an object is present in a particular zone for a predefined period of time.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The direction rule triggers an event when an object crosses the detection line in a particular direction and within the acceptance parameters.
The rule will create a line and overlay it on the live view; the line can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: You can also adjust these settings using the on-screen controls. Click and hold inside the dotted circles and drag to the desired angle.
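As a sketch of the check a direction rule performs (an illustrative model only; the function and parameter names are invented for this example, not the plug-in's internals), the rule can be thought of as comparing an object's travel direction against the configured direction plus or minus an acceptance angle:

```python
import math

def within_acceptance(object_dir_deg, rule_dir_deg, acceptance_deg):
    """True if the object's travel direction falls within the acceptance
    window either side of the rule's configured direction."""
    # Signed smallest difference between the two angles, in [-180, 180)
    diff = (object_dir_deg - rule_dir_deg + 180) % 360 - 180
    return abs(diff) <= acceptance_deg

print(within_acceptance(85, 90, 10))   # True: only 5 degrees off the rule direction
print(within_acceptance(200, 90, 10))  # False: well outside the acceptance window
```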
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The removed rule triggers an event when the area within a zone has changed for the specified time.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The abandoned rule triggers an event when an object is left in a zone for the specified time.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The tailgating rule triggers an event when objects cross over a line in quick succession of each other.
The rule will create a line and overlay it on the live view; the line can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The counting line rule triggers an event when an object crosses the line in the direction indicated.
Note: The counting line differs from the direction rule in that each segment of the line can have a different direction defined.
The rule will create a line and overlay it on the live view; the line can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
The direction indicator shows the direction objects must take to trigger an event; segments can be configured to point in any direction required.
Note: The direction that will be used is shown on the screen as you select the options.
Note: The object filtering feature is not available when using the counting line rule.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
Logical rules extend the standard rules by allowing various inputs to be combined using logical expressions; this helps to reduce false events.
The rule allows you to combine other rules into a logical expression using the AND operator and can be used to filter and reduce false events.
The AND operator combines two or more rules and only fires events if all the rules have triggered. By default, a new logical rule allows two rules to be combined, for example, trigger an event when the presence polygon AND the presence line rules are true.
This is the same as a PREVIOUS operator and holds the event as true for the defined period of time. It allows an object to trigger one of the rules and then trigger the next rule within x seconds to cause an event to occur. This is sometimes known as the double knock rule.
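The double-knock behaviour described above can be sketched as follows. This is a simplified illustration, not the plug-in's implementation; the class and method names are invented for the example:

```python
import time

class DoubleKnock:
    """Fires only when rule B triggers within `window_s` seconds of rule A,
    i.e. rule A is held true for the defined period (the PREVIOUS operator)."""
    def __init__(self, window_s):
        self.window_s = window_s
        self.last_a = None  # timestamp of the most recent rule-A trigger

    def on_rule_a(self, now=None):
        self.last_a = now if now is not None else time.monotonic()

    def on_rule_b(self, now=None):
        now = now if now is not None else time.monotonic()
        # An event occurs only if rule A triggered within the window
        return self.last_a is not None and (now - self.last_a) <= self.window_s

dk = DoubleKnock(window_s=5)
dk.on_rule_a(now=100.0)
print(dk.on_rule_b(now=103.0))  # True: rule B follows rule A within 5 s
print(dk.on_rule_b(now=110.0))  # False: the window has expired
```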
It allows the rule to be configured to trigger only on an object's classification (e.g. person, vehicle); any combination of the available options is possible.
Note: The object filters of the rules included in the logical rule are disabled.
Note: The available classifiers are different depending on the hardware platform and the installed license.
Calibration | Available classifiers of Object Filter |
---|---|
disabled | Not available (Object Filter Off) |
enabled | the classifier defined in the classification |
Tracker Engine | Calibration | Available classifiers of Object Filter |
---|---|---|
Object Tracker | disabled | Not available (Object Filter Off) |
Object Tracker | enabled | the classifiers defined in the classification |
DL Object Tracker | - | any of the available classifiers |
DL People Tracker | - | any of the available classifiers |
Note: Face detection is only available when the DL People Tracker is selected; it provides additional metadata to the channel.
It provides the ability to select objects based on their colour components, which are grouped into 10 colours.
Note: The colour filters of the rules included in the logical rule are disabled.
Note: If object colour is turned on in the BURNT-IN ANNOTATION menu, the top four colours that make up more than 5% of the object are represented by the colour swatch attached to the object.
The following settings apply to the selected notification methods.
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
The non-detect zone can be used to exclude areas of the scene from being analysed. This can be used to reduce false triggers that can be caused by moving foliage or busy scenes.
The rule will create a zone and overlay it on the live view; the zone can be reshaped as required. Selecting a grey node will split the segment and create a more complex shape; to remove a segment, select the minus sign next to a red node.
Click Save to save the current settings.
Click Cancel to return to the rules screen without saving any changes.
Counters can be configured to count the number of times a rule is triggered, for example the number of people crossing a line.
The counters page displays a live view from the camera and allows you to add, modify or delete counters.
Use this option to show analytic data on the camera view; select Start to show the data and Stop to hide it.
Note: The burnt-in annotation feature needs to be enabled for this option to function. This does not affect the processing of analytics, but the annotation requires more resource from the camera and is not on by default.
The table shows the counters that have been defined for the camera: Add adds a counter, Modify changes the settings of a selected counter, and Delete removes the selected counter.
The counter will create a counter field and overlay it on the live view, the counter can be repositioned on the screen as required.
Note: A counter position can only be modified by selecting the counter and clicking modify.
A counter is designed to be utilised in the following way:
More than one counting line rule can be assigned to a counter input. This allows, for example, the occupancy of two counting lines to be reflected in a single counter or more than one entrance / exit gate to be assigned to a counter.
Note: Counters should not be used for occupancy and increment / decrement at the same time.
Select Add input to show a list of available counting lines that can be added. Select the cross next to rules already added to remove them from the counter.
Note: Events created by a counter will not trigger the Deep-Learning Filter, even if enabled on the channel.
Resets the counter value to zero.
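The counter behaviour described above can be sketched as follows. This is an illustrative model only; the names are invented, and the real counter runs inside the camera:

```python
class Counter:
    """Sketch of the counter inputs. Per the note above, a counter should
    use either occupancy or increment/decrement, not both at once."""
    def __init__(self):
        self.value = 0

    def increment(self):
        # e.g. a counting line assigned as an increment input is crossed
        self.value += 1

    def decrement(self):
        # e.g. a counting line assigned as a decrement input is crossed
        self.value -= 1

    def reset(self):
        # the Reset control sets the counter value back to zero
        self.value = 0

c = Counter()
for _ in range(3):
    c.increment()   # three objects cross the entry line
c.decrement()       # one object crosses the exit line
print(c.value)      # 2
c.reset()
print(c.value)      # 0
```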
Note: Remember to configure the TCP/HTTP notification actions for the action rules feature to function. Any combination of the available options is possible.
Click Save to save the current settings.
Click Cancel to return to the counters screen without saving any changes.
Camera calibration is required in order for object identification and classification to occur. If the height, tilt and vertical field-of-view are known, these can be entered as parameters in the appropriate fields. If, however, these parameters are not known, a grid can be overlaid to aid in the process, and mimics are provided to allow you to check your settings.
Enable Calibration: Used for turning the calibration feature on or off.
Height: Defines the height of the camera.
Tilt: Defines the tilt of the camera.
VFOV: Defines the vertical field of view of the camera.
Note: A correct value for the camera vertical field of view is important for accurate calibration and classification.
Unit: Used for changing the unit values between metric or imperial.
Pan and Roll allow the ground plane to be panned and rolled without affecting the camera calibration parameters. This can be useful to visualize the calibration setup if the scene has pan or roll with respect to the camera.
Note: The pan and roll advanced parameters only affect the orientation of the 3D ground plane so that it can be more conveniently aligned with the video scene; they do not affect the calibration parameters.
Saves any changes that have been made.
During the calibration process, the features in the video image need to be matched with a 3D graphics overlay. The 3D graphics overlay consists of a green grid that represents the ground plane. Placed on the ground plane are a number of 3D mimics (people-shaped figures) that represent the dimensions of a person with the current calibration parameters.
The mimics are used for verifying the changes you make to the calibration settings and represent a person 1.8 metres tall. The mimics can be moved around the scene to line up with people or objects of a known size to aid in the calibration process.
Position the mimics on top or near people within the camera scene.
Note: During the calibration process, as you change settings, you may need to reposition the mimics.
Entering the correct vertical field-of-view is important for accurate calibration. The following table shows pre-calculated vertical field-of-view values for different sensors. If the table does not contain the relevant parameters, use the mimics to adjust the settings.
Focal Length(mm) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | |
---|---|---|---|---|---|---|---|---|---|
CCD Size (in) | CCD Height(mm) | ||||||||
1/6" | 1.73 | 82 | 47 | 32 | 24 | 20 | 16 | 14 | 12 |
1/4" | 2.40 | 100 | 62 | 44 | 33 | 27 | 23 | 19 | 17 |
1/3.6" | 3.00 | 113 | 74 | 53 | 41 | 33 | 28 | 24 | 21 |
1/3.2" | 3.42 | 119 | 81 | 59 | 46 | 38 | 32 | 27 | 24 |
1/3" | 3.60 | 122 | 84 | 62 | 48 | 40 | 33 | 29 | 25 |
1/2.7" | 3.96 | 126 | 89 | 67 | 53 | 43 | 37 | 32 | 28 |
1/2" | 4.80 | 135 | 100 | 77 | 62 | 51 | 44 | 38 | 33 |
1/1.8" | 5.32 | 139 | 106 | 83 | 67 | 56 | 48 | 42 | 37 |
2/3" | 6.60 | 118 | 95 | 79 | 67 | 58 | 50 | 45 | |
1" | 9.60 | 135 | 116 | 100 | 88 | 77 | 69 | 62 | |
4/3" | 13.50 | 132 | 119 | 107 | 97 | 88 | 80 |
Focal Length(mm) | 9 | 10 | 15 | 20 | 30 | 40 | 50 | |
---|---|---|---|---|---|---|---|---|
CCD Size (in) | CCD Height(mm) | |||||||
1/6" | 1.73 | 11 | 10 | 7 | ||||
1/4" | 2.40 | 15 | 14 | 9 | 7 | |||
1/3.6" | 3.00 | 19 | 12 | 11 | 9 | 6 | ||
1/3.2" | 3.42 | 21 | 16 | 13 | 10 | 7 | ||
1/3" | 3.60 | 23 | 20 | 14 | 10 | 7 | 5 | |
1/2.7" | 3.96 | 25 | 22 | 15 | 11 | 8 | 6 | |
1/2" | 4.80 | 30 | 27 | 18 | 14 | 9 | 7 | 5 |
1/1.8" | 5.32 | 33 | 30 | 20 | 15 | 10 | 8 | 6 |
2/3" | 6.60 | 40 | 37 | 25 | 19 | 13 | 9 | 8 |
1" | 9.60 | 56 | 51 | 35 | 27 | 18 | 14 | 11 |
4/3" | 13.50 | 74 | 68 | 48 | 37 | 25 | 19 | 15 |
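The table values follow from the standard pinhole relationship VFOV = 2 × atan(sensor height / (2 × focal length)). A small script (illustrative, not part of the product) can fill in combinations the table omits:

```python
import math

def vfov_degrees(sensor_height_mm, focal_length_mm):
    """Vertical field of view, in degrees, from sensor height and focal length."""
    return math.degrees(2 * math.atan(sensor_height_mm / (2 * focal_length_mm)))

# A 1/3" sensor (3.60 mm high) with a 2 mm lens -- matches the table value of 84
print(round(vfov_degrees(3.60, 2)))  # 84
```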
If the camera height is known, it can be entered. If the height is not known, it is recommended to obtain an accurate measurement; otherwise, estimate it based on known object heights in the scene.
Note: A correct camera height measurement is required for accurate video analytics.
If the tilt angle of the camera is known, then it can be entered. If the tilt angle is not known then estimate it and use the mimics as a guide to confirm and change as required.
The objective is to ensure the mimics match people or other known objects in the scene, both the height and angle of the mimic should represent the objects in the scene and adjustment of the height, tilt and vertical field-of-view may be required to achieve this.
Click Apply to save your changes.
When the calibration features have been defined, objects that are detected are assessed and assigned to one of the classifiers listed in the classification section. It has been preprogrammed with the most commonly used classifiers, but these can be added to or deleted as the scenario requires.
Use the add, modify and delete options to change these settings.
Note: The calibration process must be completed before objects can be classified.
Name: Defines the name of the new classifier.
Min. Area: Defines the minimum area for the new classifier.
Max. Area: Defines the maximum area for the new classifier.
Min. Speed: Defines the minimum speed for the new classifier.
Max. Speed: Defines the maximum speed for the new classifier.
Note: When creating or modifying classifiers, avoid overlapping parameters with other classifiers as this will cause the analytics engine to incorrectly identify objects.
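The non-overlap requirement above can be checked before applying classifier settings. The following is a minimal sketch, not part of VCAedge, and the classifier values are assumed examples rather than factory defaults:

```python
# Two classifiers conflict when both their area ranges and their speed ranges
# overlap; an object falling in the shared region could be assigned to either.
def ranges_overlap(a_min, a_max, b_min, b_max):
    return a_min <= b_max and b_min <= a_max

def classifiers_conflict(a, b):
    return (ranges_overlap(a["min_area"], a["max_area"], b["min_area"], b["max_area"])
            and ranges_overlap(a["min_speed"], a["max_speed"], b["min_speed"], b["max_speed"]))

# Assumed example values, not factory defaults:
person  = {"min_area": 0.1, "max_area": 2.0,  "min_speed": 0, "max_speed": 10}
vehicle = {"min_area": 3.0, "max_area": 20.0, "min_speed": 0, "max_speed": 50}
print(classifiers_conflict(person, vehicle))  # prints False: the area ranges do not overlap
```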
The Burnt-in Annotation feature allows analytical data to be burnt in to the raw video stream of the camera. Annotations can include tracked objects, counters and system messages.
Note:
- To display object parameters such as speed, height, area and classification, the device must be calibrated.
- The stream intended for use with burnt-in annotation must have a resolution lower than 1920x1080; this is a limitation of the camera hardware.
- For the show annotation function of Rules and Counters to work, the burnt-in annotation feature must be on and configured.
The TCP notification sends data to a remote TCP server when triggered. The format is configurable with a mixture of plain text and tokens. Tokens are used to represent the event metadata that will be included when a rule is triggered.
Note: Changing this setting will enable or disable the rule's ability to send notifications through this action. Rules can still show a link to this notification, but the notification will not occur when the rule triggers.
Note: Tokens are replaced with event-specific data at the time an event is generated and include information about the event that triggered the notification.
Note: See the section titled Tokens for full details about the token system and example templates.
Note: The available tokens can be selected from the drop-down menu below the query window.
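For reference, the remote TCP server that receives these notifications can be very simple. The sketch below uses only the Python standard library; the port 9000 is an assumption and must match the host and port configured in the camera's TCP notification action:

```python
import socketserver

class NotificationHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each connection delivers one rendered template: plain text with the
        # tokens already replaced by event metadata.
        data = self.rfile.read().decode("utf-8", errors="replace")
        print(f"Notification from {self.client_address[0]}:\n{data}")

if __name__ == "__main__":
    # Listen on all interfaces; point the camera's TCP notification here.
    with socketserver.TCPServer(("0.0.0.0", 9000), NotificationHandler) as server:
        server.serve_forever()
```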
The HTTP notification sends an HTTP request to a remote endpoint when triggered. The URL, HTTP header and message body are all configurable with a mixture of plain text and tokens. Tokens are used to represent the event metadata that will be included when a rule is triggered.
Note: Changing this setting will enable or disable the rule's ability to send notifications through this action. Rules can still show a link to this notification, but the notification will not occur when the rule triggers.
Specifies the body of the HTTP request. This can be a mixture of plain text and any supported tokens, which will be replaced with event-specific data at the time an event is generated.
Note: Tokens are replaced with event-specific data at the time an event is generated and include information about the event that triggered the notification.
Note: See the section titled Tokens for full details about the token system and example templates.
Note: The available tokens can be selected from the drop-down menu below the query window.
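On the receiving side, the remote endpoint only needs to accept the configured request. The sketch below is a minimal endpoint using only the Python standard library; the port 8080 is an assumption, and the request body is whatever template the camera is configured to send:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the rendered message body (tokens already replaced).
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        print(f"Notification received: {body}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Point the camera's HTTP notification URL at this host and port.
    HTTPServer(("0.0.0.0", 8080), NotificationHandler).serve_forever()
```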
The Tamper feature is intended to detect camera tampering events such as bagging, defocusing and moving the camera. This is achieved by detecting large persistent changes in the image.
Note: This option will reduce sensitivity to genuine alarms and should be used with caution. Remember to Apply changes for them to take effect.
Click Apply to save the current settings.
The advanced section contains settings relating to how the analytics engine tracks objects.
Note: In most installations the default configuration will be suitable.
Note: Supported settings are different depending on the tracker engine.
Abandoned Threshold: Defines the amount of time an object must be classed as abandoned or removed before an Abandoned / Removed rule will trigger.
Stationary Hold-on Time: Defines the amount of time an object will continue to be tracked and classified once it becomes stationary.
Minimum Object Size: Defines the size of the smallest object that will be considered for tracking.
Sensitivity: Allows the object tracker to be tuned to ignore movement below a certain threshold. Combined with the Display Foreground Pixels burnt-in annotation, which visualises the areas of the scene in which the object tracker is detecting movement, this value can be adjusted to filter out environmental noise. The default setting is 4.
Mid-bottom: In mid-bottom mode the detection point for each object is located at the centre of the bottom line of the bounding box.
Note: Changing the detection point that is used by the system can affect the point at which objects will trigger an event.
Click Apply to save the current settings.
Click Clean up to perform a clean-up process.
The facial detection feature recognises the face of the person who triggers the configured rule, provided that person is registered in the face profile. It is split into two sections: live feed and search.
Note: Face detection works in conjunction with a best shot process in order to provide the best image of the face for profile matching. It is advised that the Presence-Polygon rule is used when using this feature. The Presence-Polygon rule produces data for the period the object is present in the zone and allows the best shot to obtain the best image of the face.
Refer to the section in rules titled Presence-Polygon for further details on configuring the Presence-Polygon rule.
The live feed updates every 5 seconds to show the most recently discovered faces. The discovered faces are accompanied by information such as age, gender, name and group. Names and groups are only shown for people who have been registered in the face profile. Clicking the monitoring button on the search page takes you to the live stream page.
Use this option to show analytic data on the camera view; select Start to show the data and Stop to hide the data.
Start: Shows data.
Stop: Hides data.
Note: The burnt-in annotation feature needs to be enabled for this option to function. This does not affect the processing of analytics, but the annotation requires more resources from the camera and is not on by default.
Recognised faces are stored in the camera. This information can be searched using several filters.
Users can search for faces detected and stored in the camera database using the search criteria below.
0-17, 18-29, 30-44, 45-64, and 65+
Search: Perform a search against the defined criteria.
The face profile is used to identify and classify detected faces. To use it, you need to create groups and profiles.
Select import file: Select a file to import
Import: Import file
Export: Export file
1. Select Enable On.
2. Select DL People Tracker.
3. Click Apply.
4. Add a new rule.
5. Select the Presence Polygon rule, and adjust the zone to cover the whole area.
6. Select person and Face Detection under Object Filter.
7. Click Save.
Note: It is not possible to delete any profile that belongs to the selected group.
Note: The face image is available in PNG or JPG format with a minimum resolution of 150x150 pixels.
To take advantage of the VCAedge video analytic features a license is required.
In many cases, the VCA analytic features on a camera are pre-activated in the factory and further activation is only necessary to enable additional functionality. There are three different methods of activation: token, pre-activation and activation code.
An activation code is linked to the camera's hardware configuration and is not transferable.
To manage activation codes, navigate to the license feature from the VCA menu.
Note: Please refer to your license distributor for additional licenses.
Note: Multiple licenses can be applied to a camera but only one license can be active at any one time.
Click Assign to assign the selected license for use with analytics.
Note: The menu system will reload to reflect any changes the license has applied. This may result in features not being available.
Click Delete to delete the selected license from the camera.
Note: Deleting the active license from the camera will remove all analytic features. Any configured rules will be available again when a new valid license is applied.
For more information on the complete range of additional features available, please visit VCA Technology.
Tokens are used within actions such as the TCP and HTTP notifications and are automatically filled in with the metadata for the event. This allows the details of the event to be specified in the message that the action sends, e.g. the location of the object, the type of event, etc.
Below is a list of the available tokens along with a description of the data they will provide.
The name of the event
The unique id of the event
The type of the event. This is usually the type of rule that triggered the event
The status of the event
The event time in ISO 8601 format. An example of this would be:
Start time (ISO 8601 format): {{iso8601}}
Start time (ISO 8601 format): 2017-04-21T10:09:42+00:00
The event time in Unix time stamp format. An example of this would be:
time: {{time}}
time: 1582308244.376
The IP address of the device
The hostname of the device that generated the event
The event time in the format 'Day Month Date HH:MM:SS YYYY', e.g. 'Tue Jan 1 12:00:00 2019'. An example of this would be:
time: {{datetime}}
time: Tue Jan 1 12:00:00 2019
The zone ID of the event
The rule ID of the event
The bounding box of the object
The object class of the object triggering the rule
The MAC address of the device
An example for using tokens is given below:
Event #{{id}}: {{name}}
Event type: {{type}}
Start time (ISO 8601 format): {{iso8601}}
time: {{time}}
Device: {{host}}: {{ip}}: {{mac}}
Object bounding box: {{bb}}
Classification: {{objclass}}
This would produce the following text:
Event #350: My event name
Event type: presence
Start time (ISO 8601 format): 2017-04-21T10:09:42+00:00
time: 1492769382
Device: Camera: 10.0.5.2: AB:CD:EF:GH:01:02
Object bounding box: [45251:12069:14004:8563]
Classification: Vehicle
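The substitution illustrated above can be reproduced with a simple template renderer. The sketch below is an assumption about the mechanism, not the camera's actual implementation; tokens with no matching event field are left in place:

```python
import re

def render(template: str, event: dict) -> str:
    # Replace each {{token}} with the matching event field; unknown tokens
    # are left untouched so missing metadata is visible in the output.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(event.get(m.group(1), m.group(0))),
                  template)

event = {"id": 350, "name": "My event name", "type": "presence"}
print(render("Event #{{id}}: {{name}} Event type: {{type}}", event))
# prints: Event #350: My event name Event type: presence
```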