
Enabling Smarter and Caring Cities

What is Versatile Intelligent Video Analytics?

It is a platform that helps councils and businesses improve the services they provide to their communities and customers by transforming rich video into real-time insights, without compromising privacy.

VIVA enables a better understanding of what is happening within council areas by generating data that helps track mobility patterns and changes in social behaviour, contributing to better informed planning decisions.

  • Real-time multi-modal detection and tracking: the sensors detect and track pedestrians, vehicles, cyclists, etc. in real time. Detection and tracking rely on state-of-the-art artificial intelligence algorithms and can easily be extended to your needs.
  • Privacy: Only metadata and relevant indicators are transmitted by the sensors, not the raw images. In addition, no image is stored in the sensor.
  • Leveraging existing infrastructure: As cities have already made huge investments in CCTV systems, the solution takes advantage of the existing infrastructure in terms of networks and cameras.
  • Scalability: As the computations are done within the sensors, the platform does not rely on a centralised computing infrastructure, and new sensors can be added at any time.
  • Interoperability: By relying on industry standards, VIVA can be integrated with different data collection and visualisation platforms. In addition, adding VIVA sensors to your existing infrastructure is completely transparent.
  • Data ownership: the data generated by the VIVA platform can be streamed directly to your own infrastructure, meaning that you own your data and do not need an additional third party to host and manage it.
  • Flexibility: the algorithms used by VIVA can be deployed anywhere there is a camera and can be updated and adapted to new uses. These include detecting new objects of interest, computing distances between detections, monitoring infrastructure, estimating the speed of detected objects, using thermal cameras, etc.
  • Security: the platform relies on a modern Linux-based system and can automatically apply the latest security patches available.
  • Leveraging the latest NVIDIA technologies: VIVA mainly relies on the Jetson edge-computing platform and Metropolis.
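As a concrete illustration of the privacy and interoperability points above, a metadata-only sensor message can be sketched as follows. The `build_detection_packet` helper and its field names are hypothetical, not the actual VIVA schema; the point is that only classes, track identifiers and bounding boxes leave the sensor, never pixels.

```python
import json
import time

def build_detection_packet(sensor_id, detections):
    """Assemble a compact, privacy-preserving packet: only object
    classes, track identifiers and bounding boxes are transmitted,
    never the raw image."""
    classes = {d["class"] for d in detections}
    return json.dumps({
        "sensor_id": sensor_id,
        "timestamp": int(time.time()),
        # per-class counts, handy for dashboards
        "counts": {c: sum(1 for d in detections if d["class"] == c)
                   for c in classes},
        "detections": [{"class": d["class"],
                        "track_id": d["track_id"],
                        "bbox": d["bbox"]} for d in detections],
    })
```

Because the packet is plain JSON, it can be pushed to any data collection or visualisation platform that accepts structured messages.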

Versatile Intelligent Video Analytics

(Video transcript)

CCTV networks are common in every city. To help meet the need for real-time data on human movement and traffic flow, we created Versatile Intelligent Video Analytics, VIVA for short. What sets us apart from other systems on the market is that VIVA protects the privacy of all citizens, as there are no facial recognition capabilities. Liverpool City uses VIVA to monitor the flow of pedestrians and vehicles to help with urban planning and monitor pollution exposure, making the city more livable and healthy. VIVA can monitor wildlife movements, green space usage, graffiti detection and trolley dumping, and can even identify victims in building evacuations for first responders. VIVA provides real-time data for a range of problems, enabling your business or council to come up with long-term, economically beneficial solutions to benefit your clients and the community.

VIVA in action: supporting councils and industry

The Liverpool Smart Pedestrian project is a research collaboration between SMART, Liverpool City Council and industry partners, funded by the Smart Cities and Suburbs Program in 2018.

The project enables Liverpool City Council to better understand the use of the city by pedestrians, cyclists and vehicles. The data collected cannot identify individual vehicles or people.

Previously, Liverpool City Council did not have data on existing pedestrian movements or behaviour as the baseline to design the future management of movement. This project seeks to take the smart city initiative to a new level of technology by monitoring pedestrian and vehicle movement without any compromise to the privacy of the people of Liverpool.

The data generated by the visual sensors, air quality sensors and people counters are available on the project’s dashboard.

The project was awarded the Smart City Award from the Committee for Sydney in 2018 and was also featured at top international conferences, including the World Conference for Transportation Research (2019), the NVIDIA AI Conference in Sydney (2018) and the NVIDIA GPU Technology Conference in San Jose, California (March 2019).

As part of the Illawarra-Shoalhaven Smart Waterways Project, Wollongong City Council and Shoalhaven City Council are using a computer vision solution developed by the VIVA team to monitor stormwater culvert blockage. The remote monitoring device consists of a camera and a processing unit that analyses images of a culvert in real time and assesses its level of blockage. For this project, connectivity relies on the 4G network, allowing over-the-air updates.

To gather enough data for training this AI, in addition to real images from past flooding events, we developed a synthetic data generation application based on Unreal Engine, using ray-tracing technologies to improve the realism of the generated images.

The City of Holdfast Bay (SA) requested a system for pedestrian and bicycle tracking along its North Brighton Promenade. Our solution, currently in use and based on a Jetson Xavier, identifies and locates cyclists and pedestrians, detects their trajectories and estimates their speed.
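A speed estimate like the one used at North Brighton can be sketched from a tracked trajectory. The `estimate_speed` helper and the pixel-to-metre scale below are illustrative assumptions, not the deployed implementation:

```python
import math

def estimate_speed(trajectory, metres_per_pixel=0.05):
    """Average ground speed (m/s) of a tracked object.

    trajectory: list of (timestamp_s, x_px, y_px) samples produced by
    the tracker, in chronological order. The metres_per_pixel scale is
    an assumed camera calibration constant."""
    if len(trajectory) < 2:
        return 0.0
    # sum the pixel distances between consecutive samples, converted to metres
    distance = sum(
        math.hypot(x1 - x0, y1 - y0) * metres_per_pixel
        for (_, x0, y0), (_, x1, y1) in zip(trajectory, trajectory[1:])
    )
    elapsed = trajectory[-1][0] - trajectory[0][0]
    return distance / elapsed if elapsed > 0 else 0.0
```

In practice the scale factor would come from a homography between image and ground coordinates rather than a single constant.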

Campbelltown City Council and Liverpool City Council are using VIVA paired with a thermal camera to better understand the impact of material selection and greenery on the usage of urban spaces, including their playgrounds. The AI has been trained to understand thermal images and detect pedestrians. The solution runs on a Jetson TX2 paired with a Workswell WIC-36 camera. The final product is an edge-computing solution that can be easily redeployed.

Project One

Cumberland Council (NSW) is seeking to procure a traffic monitoring solution for its Granville precinct. Part of the Granville Smart Precinct Pilot project, it aims to better understand the vehicle mix and frequency (including peak and off-peak periods), as well as their relationship to infrastructure provision within the precinct.

This project uses edge-computing technology to pre-process visual information at the level of the camera. The edge-computer will use an NVIDIA Jetson embedded system to run a bespoke AI algorithm to extract anonymized information and send optimized and compact data packets through the Council’s network. Each smart camera will (1) detect individual vehicles, (2) recognize vehicle types and (3) sample anonymized number plates at a given location within the precinct.

Each smart camera will then send an allocation table to a Cloud-based or server-based solution (using 4G transmission); this table will associate number plates with anonymized vehicle identifiers. The solution will then reconcile, in real-time, entry and exit time for each identifier in the system, allowing for the estimation of individual dwelling times. As soon as a vehicle exits the area of interest, its number plate and identifier will be deleted in order to respect privacy. However, anonymized times of entry, exit and dwelling duration will be kept in the system.
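One way to implement the allocation-table logic described above is sketched below. The `DwellTimeTracker` class, the salted hash and the record layout are assumptions for illustration, not the actual Council system; what matters is that the plate-to-identifier association is deleted at exit, while anonymized entry, exit and dwelling times survive.

```python
import hashlib

class DwellTimeTracker:
    """Associates number plates with anonymized identifiers at entry;
    on exit the association is deleted and only the anonymized
    entry/exit/dwelling times are kept."""

    def __init__(self, salt):
        self.salt = salt
        self.entries = {}      # anon_id -> entry time (no plates stored)
        self.dwell_times = []  # (entry, exit, duration) records

    def _anon_id(self, plate):
        # salted one-way hash: the plate cannot be recovered from the id
        return hashlib.sha256((self.salt + plate).encode()).hexdigest()[:12]

    def vehicle_entered(self, plate, t):
        self.entries[self._anon_id(plate)] = t

    def vehicle_exited(self, plate, t):
        anon = self._anon_id(plate)
        if anon in self.entries:
            entry = self.entries.pop(anon)  # association discarded here
            self.dwell_times.append((entry, t, t - entry))
```

A per-deployment random salt prevents the same plate from producing the same identifier across precincts or days.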

Project Two

Cumberland Council (NSW) is seeking to procure a night safety monitoring solution as part of its Granville Smart Precinct Pilot project. The Council is interested in identifying potential locations where they can enhance the sense of place and the sense of security/safety by installing smart cameras (security) and lighting (safety). The solution must rely on solar power.

Our solution uses a combination of smart street lighting and smart cameras. The latter use edge-computing technology to pre-process visual information at the level of the camera. The motion sensor turns on the light and camera when a presence is detected.

The edge-computer relies on the NVIDIA Jetson NX to run a bespoke AI algorithm to analyse video footage and send optimized and compact data packets through the 4G network. The smart camera (1) switches on when the motion sensor detects movement in the area of interest, (2) detects the number of pedestrians in its range of vision, and (3) analyses images to detect abnormal behaviour.
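The three-step behaviour of the smart camera can be sketched as a small state machine. The class and callback names below are hypothetical, and the detector and behaviour classifier are passed in as plain callables standing in for the bespoke AI models:

```python
class NightSafetyCamera:
    """Sketch of the motion-triggered pipeline: the camera stays off
    until the motion sensor fires, then each frame is analysed for a
    pedestrian count and abnormal behaviour."""

    def __init__(self, detect_people, is_abnormal):
        self.detect_people = detect_people  # frame -> list of detections
        self.is_abnormal = is_abnormal      # (frame, people) -> bool
        self.active = False

    def on_motion(self):
        # (1) motion sensor switches on the light and the camera
        self.active = True

    def process_frame(self, frame):
        if not self.active:
            return None                      # camera is off: nothing to send
        people = self.detect_people(frame)   # (2) count pedestrians in range
        alert = self.is_abnormal(frame, people)  # (3) behaviour analysis
        return {"pedestrians": len(people), "alert": alert}
```

Keeping the camera dormant until motion is detected is also what makes a solar-only power budget feasible.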

The system uses a smart pole-mounted solar unit (illustrated on the left) that does not need any digging or connection to power sources requiring certified operators. Moreover, when the benchmarking period is over, this stand-alone pole can be relocated to other sites of interest.

Garbage collection truck drivers have to perform a complex chain of tasks simultaneously: safely manoeuvring the truck, operating the controls to empty the bins, monitoring the video feed and recording any contaminants seen. If the tasks of monitoring the video feed for contaminants and recording the types of contaminants are automated, drivers can focus on safe manoeuvring and bin emptying.

In this project, we developed a deep neural network algorithm to detect various types of contaminants using the same video footage normally monitored by the truck driver.

Once deployed, this algorithm will monitor the real-time video footage for contaminants and record data about the types of contaminants detected against the street location (latitude/longitude). The primary advantage of this system is that it frees the truck driver from video monitoring and data entry, allowing them to focus on safe manoeuvring.

The information generated by the solution can then be used to map contamination rates for different contaminant types at street or suburb level. Such maps can then be used to prioritise areas requiring tailored education programs or other interventions, such as promoting healthy competition among neighbourhoods and suburbs to reduce contamination. These activities are encouraged by the Senate Standing Committees on Environment and Communications (2018), which emphasise the need to educate the public about ways to reduce contamination of recyclables.
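The logging and aggregation steps described above might be sketched as follows; the function names and record layout are illustrative assumptions, with the neural network's per-frame detections standing in as a plain list of contaminant labels:

```python
from collections import Counter

def log_contaminants(detections, gps_fix, log):
    """Record each detected contaminant type against the truck's
    current GPS position (latitude, longitude)."""
    lat, lon = gps_fix
    for contaminant_type in detections:
        log.append({"type": contaminant_type, "lat": lat, "lon": lon})
    return log

def contamination_rates(log):
    """Aggregate the log into counts per contaminant type, ready to be
    binned by street or suburb for mapping."""
    return Counter(entry["type"] for entry in log)
```

Joining the lat/lon records against street or suburb boundaries would then produce the per-area contamination maps used to target interventions.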

The main output of this project is a sensor able to process the live video feed of a CCTV camera locally (i.e. at the edge of the network) to establish the level of threat associated with a pipeline encroachment. Processing the raw video locally, instead of in the cloud or on a dedicated server, has the benefit of lowering the bandwidth required to transmit the information generated by the sensor (only metadata and indicators instead of a continuous video stream).

A state-of-the-art deep learning algorithm is used to establish the difference between a low-level incursion (e.g. a cow walking over the easement), a high-level incursion (e.g. an excavator parked adjacent to the easement) and variations in between.
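Downstream of the deep learning detector, the mapping from detected object classes to an incursion threat level could be as simple as the following sketch. The class names, levels and `assess_threat` helper are illustrative assumptions, not the deployed logic:

```python
# hypothetical class-to-threat mapping near a pipeline easement
THREAT_BY_CLASS = {
    "cow": "low", "person": "low",
    "car": "medium", "truck": "medium",
    "excavator": "high", "drilling_rig": "high",
}
LEVELS = ["none", "low", "medium", "high"]

def assess_threat(detected_classes):
    """Overall threat for a frame: the highest level among all object
    classes the detector reports near the easement."""
    levels = [THREAT_BY_CLASS.get(c, "low") for c in detected_classes]
    return max(levels, key=LEVELS.index) if levels else "none"
```

Only frames whose assessed level exceeds "low" would need to trigger a satellite transmission, keeping the bandwidth budget small.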

Using edge technology to communicate with the control centre via satellite only when needed makes data transfer efficient, so the system is significantly more economical than continual data transfer through, for example, a digital camera and satellite uplink. The control centre would then dispatch a technician to investigate the incursion. This project is a collaboration between Fleet, SEA Gas and UOW.

This pilot project leverages Sydney Trains’ CCTVs by adding state-of-the-art and privacy-compliant AI to the existing CCTVs to detect in real-time risky situations for women. The AI will alert operators for investigation.

Surveillance cameras are everywhere yet are typically only used to investigate incidents manually, after they have happened. Unfortunately, according to a recent survey, the majority of women do not feel safe reporting the incidents, violence and abuse they experience in the city at night. Our system automates the reporting of such violence while leveraging the existing CCTV network, adding new capabilities to the surveillance cameras.

This is achieved by pairing the network of cameras with a state-of-the-art AI able to analyse live video feeds from the CCTVs in real time. The AI will be trained to detect incidents (e.g. people fighting, a group of agitated persons, people following someone else, people arguing, abnormal behaviour, people staying at the same location for a long period of time, people running) and unsafe environments (e.g. a lack of lighting where there should be some). Pose estimation techniques are used to classify the actions and behaviours of people in real time.
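On top of the pose and track features, a final rule layer could map per-track statistics to the incident categories an operator is alerted about. The thresholds and category names below are illustrative assumptions, not the trained model:

```python
def classify_behaviour(avg_speed_mps, seconds_at_location, group_size):
    """Toy rule layer over pose/track features: maps simple per-track
    statistics to alert categories. All thresholds are illustrative."""
    if avg_speed_mps > 3.0:          # faster than brisk walking
        return "running"
    if seconds_at_location > 600:    # stationary for over ten minutes
        return "loitering"
    if group_size >= 5:              # large gathering near the track
        return "large_group"
    return "normal"
```

In a real deployment these rules would be replaced or augmented by a classifier trained on labelled incident footage, but a transparent rule layer makes alerts easy for operators to audit.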

The system will then alert a human operator who can quickly react if there is an issue. A report of the incident is automatically generated and can be used for further improvement of the AI.

The data and reports generated by the solution can be used to help prevent the abuse and violence committed towards women after dark in public transportation.

The VIVA team developed the AI used in EVisuals’ Incident Platform. The aim was to use the CCTV network of a building to detect and count the number of people and firefighters in the building during an emergency event (such as an evacuation due to a fire).
