Artificial Intelligence falls short of being properly defined, largely because we still know so little about natural intelligence itself.
Nevertheless, it is clear that computer pattern recognition and learning capabilities are steadily improving. This is notably visible in image segmentation and recognition, strategy games, language processing, and semantic search. Google responds to our queries with an incredibly personalized touch. Self-driving cars, meanwhile, demonstrate an extraordinary ability to mimic, and even improve on, human driving.
It is fair to ask where we stand with AI, as we seem to have reached a “local optimum”.
When experts are asked how far we are from facing human-like intelligence, they often fall back on vaguely defined notions such as “thinking”, “feeling”, “creating”, or “understanding”.
DARPA (the US Defense Advanced Research Projects Agency) positions AI research in its third wave: after expert systems and machine learning, we are now entering a phase in which AI systems should be able to explain their reasoning. This will be particularly interesting as cognitive systems like IBM Watson surpass human capabilities, far exceeding our capacity to explore thousands of similar cases in healthcare or legislation.
Recently, many visionaries, including successful entrepreneurs and scientists, have warned that “Artificial Intelligence” could be a threat to humanity.
Many questions flow from these statements.
As disembodied as such an artificial intelligence might be, it would still need an infrastructure to run on and actuators to interact with the physical world.
Even with the rise of the Internet of Things (IoT), there is no system able to leverage its connected environment to impact the physical world at scale. We are light years away from a sufficiently interconnected and interoperable world that would allow a machine to take control of humanity or harm it substantially, and that is just fine.
Robotics creates many occasions to dream of humanoid robots that could make the nightmare a reality, as in the “Terminator” saga or the more recent “Metalhead” episode of the “Black Mirror” series. But it can easily be argued that humanity remains by far the most prevalent danger to itself and that, if intelligent robots did exist, they would probably be helping us out, as in the excellent animated film “The Iron Giant”.
It is far more realistic to conceive of the machine as a supplement to our intelligence, pursuing objectives like decision support and task automation, above all amid the growth of the cyberworld and its data, where individual human capacities apparently fall short of encompassing their virtual environment. In those ventures, the human operator sits before the screen, bridging human perception of the physical environment with computer inputs and trying to consolidate the concepts and information into one holistic mental model of reality. That is difficult, because it requires modeling the physical environment and integrating digital inputs into the model.
An alternative view exists: preserving the operator’s authentic perception of physical reality (vision, hearing) while superimposing computed data. This is called Augmented Reality (AR).
Augmented reality is an excellent environment for creating a seamless interface between human and computer. Half virtual rendering and half real perception, it allows Artificial Intelligence to capture the human operator’s perceptions on the fly and superimpose computational results in the form of synthetic artifacts. The first examples of Augmented Reality appeared in the 1980s, unsurprisingly in the military: specific information projected onto the helmets of fighter and tank pilots, known as HUDs (Head-Up Displays).
As we speak, AR still hasn’t reached the mainstream market. Microsoft HoloLens has paved the way, but at more than €3,000 the device simply cannot reach mainstream consumers. Microsoft is instead pushing it hard to design teams in large companies: car manufacturers, pharmaceutical companies.
Virtual Reality (VR), on the other hand, is now progressing at a high pace through popular products like Facebook’s Oculus Quest 2, which delivers incredible sensations and shared experiences for less than €400. The problem with VR is the “V”: the difficulty and cost of creating a digital twin of most environments is what prevents VR from being competitive in the human-machine interface race. As a former R&D engineer in commercial flight simulators back in the 1990s, I know what it takes to create and run a synthetic environment in the professional world.
Amazingly, VR seems to be “the human going into the computer” (we all remember the Tron movie), while AR is really “the computer coming into reality”.
Personal Interactor, 2021
Furthermore, the computer does not come alone, but together with the operator, or operators. Another movie comes to mind here: “Her”. Her depicts an audio augmented reality, a human operator assisted solely by voice.
But we can imagine many different ways to support the human operator, at least one per sense: superimposed visual information, voice assistance, haptic assistance, and probably, in the near future, synthesized smell and taste.
Skeptical? I strongly advise testing the latest Oculus Quest 2 from Facebook, an extraordinary device that is revolutionizing the VR world. This is only the beginning of a global shift, and an amazing tribute to the visionary work of Ernest Cline, born in 1972, who wrote the 2011 novel “Ready Player One”, featuring people playing inside a matrix. The novel became a blockbuster in 2018: a $500M+ movie by Steven Spielberg, worth seeing... well, a must-see.
Is that the future: people diving into parallel realities, absorbed by mega-brands, sucked by virtuality into one or more matrices? It will take a long time to figure out the best human-machine interface. Elon Musk and his team are working hard on Neuralink to bind the operator and the computer directly and wirelessly through thought. Even Mars seems closer…
We learn as we walk, and tomorrow’s solution will not be today’s dream. Innovating is like a street fight: it focuses your attention on a single subject, and the tunnel effect narrows your sight, preventing you from seeing the entire scene and the surrounding elements that may impact you and change your perception of reality. The human-machine ecosystem is a very large one; even with only five physical senses, it is hard to tell whether a voice supplement is better than a video clip or a graph for providing contextual support on a specific subject. There is plenty of ergonomic work to plan in the next generation of operations centers.
A metaphor as a takeaway: the wheel was invented to supplement human legs, not to replace them. Thanks to the wheel, humans go faster and farther, with less fatigue. No one ever thought the wheel would take control over the world.
Augmented Intelligence, like the augmented carriage (the wheel), remains an artifact.
Often attributed to Socrates, and also present in the wisdom of Zen Buddhism, the three filters are meant to restrain speech in order to preserve peace.
Is what I am about to say “True? Good? Useful?”
If not, I had better stay silent.
The CNIL, the French data protection authority, issues rulings and its word carries authority. It too applies filters, in this case those of the legal texts, but above all it makes assessments, notably of risks, and authorizes or prohibits data processing operations when they may infringe individual liberties. In some cases of manifest negligence toward people’s privacy, the CNIL also imposes heavy fines.
Is this processing “Fair? Proportionate? Risky?”
These are the CNIL’s “Socratic filters”, under which the facial-recognition biometrics that two schools in the PACA region (the Eucalyptus high school in Nice and the Ampère high school in Marseille) wished to deploy were not authorized.
What was the purpose of the planned processing?
The system, which was to involve only students who had given prior consent and was to be trialed for a full school year, was meant to assist the staff in charge of access control at the schools, in order to prevent intrusions and identity fraud and to shorten the duration of these checks.
Transposed into technical terms, the idea was to create a whitelist of authorized students, link their biometric profile to their badge identifier, and thus automatically detect whether the badge presented at the access control matched its bearer. The whitelist also makes it possible to identify a person who is not on it and subject them to manual authentication. Finally, as a corollary of the whitelist, the system can manage a blacklist of personae non gratae and trigger alarms upon detection.
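The decision logic described above can be sketched in a few lines. This is a hypothetical illustration, not the actual deployed system: `match_score`, the threshold, and the return values are all assumptions standing in for a real face-matching engine.

```python
# Hypothetical sketch of the whitelist/blacklist access-control logic.
# match_score() stands in for a real face-matching engine; names and
# the threshold are illustrative, not taken from the actual deployment.

MATCH_THRESHOLD = 0.9  # minimum similarity to accept a face/badge pairing

def check_access(badge_id, live_face, whitelist, blacklist, match_score):
    """Return an access decision for a presented badge and captured face.

    whitelist: dict mapping badge_id -> enrolled biometric template
    blacklist: list of templates of personae non gratae
    match_score: function(template, face) -> similarity in [0, 1]
    """
    # Blacklist check: trigger an alarm on any detection.
    if any(match_score(t, live_face) >= MATCH_THRESHOLD for t in blacklist):
        return "ALARM"
    template = whitelist.get(badge_id)
    if template is None:
        # Unknown badge: fall back to manual authentication.
        return "MANUAL_CHECK"
    if match_score(template, live_face) >= MATCH_THRESHOLD:
        return "GRANTED"  # badge matches its bearer
    return "MANUAL_CHECK"  # badge presented by someone else
```

Note that the blacklist path is an identification (one-to-many comparison) while the whitelist path is an authentication (one-to-one against the badge’s template), which is precisely the distinction the CNIL’s proportionality analysis turns on.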
The CNIL’s arguments are final: biometric facial-recognition processing aimed at authenticating students at school entrances is disproportionate. A “sledgehammer to crack a nut”.
Indeed, the biometric characteristics recorded are the faces of high-school students, most of them minors. But this argument weakens somewhat when one considers the group killings some minors have proven capable of around schools (murders in Seine-Saint-Denis, October 2018 and October 2019), or the plain statistics recorded by the ONDRP and cited in research articles from the Centre de Recherches Sociologiques sur le Droit et les Institutions Pénales (CESDIP).
A suicide and a murder: a dramatic start to the school year in Seine-Saint-Denis
Two tragedies marked the start of the school year in Seine-Saint-Denis, where teachers and parents once again deplore the State’s “abandonment” of schools.
A nursery-school headmistress who committed suicide while denouncing her working conditions, a high-school student stabbed on the sidelines of a PE class: two tragedies marked the start of the school year in Seine-Saint-Denis, where teachers and parents once again deplore the State’s “abandonment” of schools. Last Friday, Kewi, 15, was fatally stabbed at the entrance of the Les Lilas municipal stadium. A young PE teacher tried to revive him before the panicked eyes of his students.
Two days later, three students aged 14 and 15 were placed in pre-trial detention, presumed protagonists of a turf war that had already claimed one victim, Aboubakar, 13, in October 2018. “This is not a news item on the public highway, but a homicide during school hours,” insists Gabriel Lattanzio, a teacher and SNES delegate at the Paul Robert high school.
Source: Paris Match | Published on 12/10/2019 at 1:20 p.m.
The CNIL clearly defines the contours of facial-recognition use. This technology has reached a level of accuracy that makes it usable for authenticating people.
The main risk with biometric data is unauthorized access: the data can be stolen, then used to recognize its bearers without their knowledge, through illicit or abusive systems. Possessing a person’s facial biometric characteristics amounts to being able to recognize them almost with certainty by computing means.
The Chinese social-credit experiments in certain cities are readily cited as an example.
One thing is certain: biometric data are natural signatures that cannot change (barring accident or surgery), and as such they must be protected.
An important corollary: biometric data can be captured as many times as necessary, so they can be erased under an obsolescence policy and re-acquired if and when needed.
This legitimately raises the question of the “dispersion” of these signatures across different media, of their physical and logical security, and of their erasure as they are used.
These questions become all the more acute as the government is considering using facial recognition to authenticate users of FranceConnect, the single sign-on (SSO) for accessing public services. The Minister of the Interior has thus asked the CNIL to approve a facial-recognition biometric processing system named ALICEM (Authentification en ligne certifiée sur mobile, certified online authentication on mobile).
The CNIL expressed reservations about the application’s compliance with the GDPR’s “informed consent” requirement, thereby questioning the lawfulness of the processing.
The introduction of ALICEM sets a precedent insofar as facial-recognition authentication is the only method offered by this application. It remains possible to authenticate on FranceConnect with the current password.
Some technicians have also noted the absence of two-factor authentication (2FA) for FranceConnect, whether for password or facial-recognition authentication. It is an interesting point: while it makes sense to confirm a website login with 2FA by sending an SMS to the identified user’s phone, it is harder to see the value of an SMS sent to the very phone that has just performed the facial recognition.
Others regretted that the ALICEM application does not offer an alternative authentication method. But one already exists: it is called FranceConnect. ALICEM is designed to add facial-recognition authentication.
As these two examples show, there are several scales of use for facial-recognition biometrics, and an essential risk of data leakage that only a duly performed PIA (Privacy Impact Assessment) and consultation with the regulator can arbitrate.
What makes a good video surveillance recording system? Is it simply a system that provides storage capacity, reliability, and redundancy?
Certainly not, or at least not only. These qualities are necessary, but the recording system’s ability to help find near-real-time or forensic events is key. Too much time is lost searching for specific excerpts, and too little time is available when dramatic events happen.
That is where video analytics comes into the loop.
When dealing with video analytics, one often refers to real-time alarm detection based on image analysis. A wide variety of algorithms have been proposed, some running on dedicated network appliances that read a video stream and analyze it on the fly, others running directly on the boards of IP encoders and cameras.
Nevertheless, a different approach to video analytics, of high interest, has been proposed by some high-end video surveillance vendors: forensic analytics. While the algorithms remain the same, they are applied to a recorded stream instead of a live feed. Hence it is not about being alerted in real time, but about finding relevant video evidence, using analytics as “filters” that help isolate images of interest.
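The “analytics as filters” idea can be sketched as a simple pipeline over an archive of recorded segments. Everything here is illustrative: the `Segment` fields, the two filters, and the precomputed `motion_level` are assumptions, not any vendor’s actual data model.

```python
# Illustrative sketch of forensic analytics used as "filters" over a
# recorded archive: instead of raising live alarms, each analytic is
# applied to stored segments and only matching excerpts are returned.
# Segment fields and the filters themselves are hypothetical.

from dataclasses import dataclass

@dataclass
class Segment:
    camera: str
    start_s: int          # offset in the archive, in seconds
    end_s: int
    motion_level: float   # 0..1, precomputed by an analytic engine

def motion_filter(threshold):
    """Filter keeping only segments with significant motion."""
    return lambda seg: seg.motion_level >= threshold

def camera_filter(camera):
    """Filter keeping only segments from one camera."""
    return lambda seg: seg.camera == camera

def search_archive(archive, filters):
    """Apply a chain of filters to isolate images of interest."""
    return [seg for seg in archive if all(f(seg) for f in filters)]
```

Chaining filters this way is what turns hours of archive into a handful of candidate excerpts for a human operator to review.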
While forensic analytics prove to be an efficient ally in chasing relevant images in a video archive, it should not be forgotten that they usually require a lot of processing power. That is why, just as in the real-time case, there are two different system architectures. On-board or embedded forensic analytics run directly on the NVR (Network Video Recorder) hosting the recorded video. This architecture is limited by the CPU power of the NVR server, which is one reason why very few manufacturers offer NVRs with embedded analytics, and those that do limit the number of concurrent filters. In the alternative architecture, the analytics engine runs on a dedicated server, gets its video streams from the VMS (Video Management Server), and sends analysis results back to the VMS.
The latter architecture is far more scalable and maintainable, but it has a major drawback: the NVR’s capacity to serve multiple concurrent stream requests from multiple analytic algorithms.
Industrial solutions like Agent Vi, in conjunction with VMS platforms like Genetec Omnicast, promise to:
“Apply an unlimited number of analytics rules of any kind and combination to each camera in parallel to the video recording.”
This is a very interesting value proposition, indeed.
Nevertheless, the bottleneck of such an architecture remains the source of the video feeding the analytics servers. While the analytics servers can be parallelized, using as many of them as required, the video storage cannot be replicated. The solution therefore lies in a storage system with massive concurrent access capacity, which opens a completely new field of investigation for the security systems architect. Industrial NVRs, inherently limited both in bandwidth and in processing power, have to be replaced by a new storage system, able to centralize the storage of huge numbers of streams and, at the same time, distribute a large number of video feeds to a large number of analytics servers.
This is the challenge that a company like Quantum has taken up with its StorNext system. StorNext offers a file-system interface with impressive scalability and performance, very well suited to the stringent requirements of video surveillance.
The StorNext Storage Area Network uses Fibre Channel block-level technology and delivers a maximum throughput of 8 Gbps, which is instrumental to the safe and efficient storage of thousands of cameras. It has been successfully tested in a certification process with the Milestone VMS.
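To give the 8 Gbps figure some intuition, here is a back-of-envelope estimate of how many simultaneous camera streams such a link can carry. The 8 Gbps comes from the text; the per-camera bitrate is an assumption (typical 1080p H.264 streams run on the order of 2-6 Mbps).

```python
# Back-of-envelope check of what an 8 Gbps SAN throughput represents.
# The per-camera bitrate is an assumed average, not a vendor figure.

san_throughput_mbps = 8_000      # 8 Gbps expressed in Mbps
bitrate_per_camera_mbps = 4      # assumed average per HD H.264 stream

max_cameras = san_throughput_mbps // bitrate_per_camera_mbps
print(max_cameras)  # -> 2000 simultaneous streams at this bitrate
```

In practice the usable number is lower, since the same storage must serve write traffic from recording and read traffic toward analytics servers at once, which is exactly the concurrency problem discussed above.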
Not only does StorNext simplify and streamline the management of video storage, it also allows a transparent move to a whole new paradigm of forensic and near-real-time parallel video analytics, urgently needed to absorb the gigantic video loads required by anti-terrorism and homeland security.
“Phygital”, the blend of the physical and the digital in an ever-growing range of connected objects, carries the finest innovations in the service of humans, but also the risks inherent to information technologies.
France has a good head start in the protection of individual liberties, with the 1978 Loi Informatique et Libertés, but the recent European data protection regulation (GDPR) makes it clearer how the risks that weigh on our data today could lead to far worse risks to our physical environments.
Think of a factory that stops, a bank that can no longer pay, a power plant that runs out of control, or a computer that refuses to boot.
Cybernetics, or communication in complex systems, living or inert, is the art of the network, from the great undersea cables to Bluetooth personal networks, via optical fibers and radio transmissions.
Cyber, this hyperconnected environment, forces us to think about our overall security within a lattice of connected objects, where each opening onto the network is both an opportunity to obtain a service and a risk of being spied on. Cyber today, physical tomorrow: such could be the threat.
This presentation attempts to take stock of the threat facing institutional actors and individuals alike, and offers some elements of response among those available today.
The second part details the GDPR, one of those elements of response, in its dual technical and legal dimension, contrasting it with the CLOUD Act enacted almost simultaneously by the United States of America.
“We are moving to a new age of predictive policing where officers will work alongside machines to gather and analyse data to support police investigations and operations, ultimately helping to prevent and reduce crime and enhance security.”
What does the French Prime Minister’s decision to systematize bodycam usage by French police forces really announce, if not a new era of video surveillance, in sync with the new usages of mobility?
Sensor miniaturization, progress in video encoding, and the increase in SD card storage capacity have made mass sales of high-definition action cams possible. Like extreme-sports enthusiasts, cars, drones, and even police officers now wear these video witnesses, able to record hours of video and sound (at least 24 h on a 32 GB SD card, and soon twice as much if H.265 meets expectations). In the case of bodycams, the video is captured and recorded on the device, and has to be offloaded onto large-capacity external storage to be preserved and analyzed. That considerably increases the volume of security video already needed for traditional stand-alone security cams. As an example, the 4,500 bodycams that are supposed to be used by the French police over the next few months will require another 3 petabytes (3,000 terabytes).
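It is worth checking what those figures imply per camera. Using only the numbers quoted above (4,500 bodycams, 3 PB of extra storage, one 32 GB card per 24 h of recording) and decimal units, the arithmetic works out as follows:

```python
# Sanity check of the figures quoted above: 4,500 bodycams and 3 PB of
# extra storage, with the article's 32 GB-per-24h recording rate.
# Decimal units assumed: 1 PB = 1,000 TB = 1,000,000 GB.

total_storage_gb = 3_000_000          # 3 PB
cameras = 4_500
gb_per_day = 32                       # one 32 GB SD card holds ~24 h

gb_per_camera = total_storage_gb / cameras      # ~667 GB per bodycam
retention_days = gb_per_camera / gb_per_day     # ~21 days of footage

print(round(gb_per_camera), round(retention_days, 1))
```

So the 3 PB figure corresponds to roughly three weeks of continuous footage retained per camera, a plausible retention window for evidentiary purposes, and a useful reminder that the storage requirement scales with retention policy as much as with camera count.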
This milestone of mobility in digital video in fact illustrates a real paradigm shift for video surveillance: the shift from network-centric video surveillance (IP video surveillance) to storage-centric video surveillance, storage being the indispensable source for preserving video and feeding subsequent streaming and processing. This step of course holds a few technical challenges related to managing enormous amounts of centralized video and analyzing it, whether for preventive indexing or post-event investigation. Nevertheless, it does not constitute the real revolution, the one that really changes the way people will operate. That revolution will be brought by high-bandwidth wireless connectivity (LTE and up), which will allow real-time transmission of video to operations centers from mobile, wireless, geolocated cameras in the field.
This post describes the transformation happening in the technologies and operations of video surveillance while the IP video surveillance wave is still ongoing. It explores the mid-term and long-term consequences of the mobility of video surveillance feeds, which multiplies the potential of surveillance systems by providing adaptability, scalability, and interoperability.
Two recent examples from the terrorist attacks in Paris will illustrate our point: first, the French Prime Minister’s decision to extend the use of body cameras worn by police officers; then the use of a DJI Phantom quadcopter drone by the special police forces who assaulted and killed the three ISIS terrorists entrenched in an apartment in downtown Saint-Denis on November 18, 2015.
We will end this prospective study with a technology review, in an attempt to measure the real expectations raised by the generalization of these new mobile usages, in terms of systems. As usual in security, technologies are not usable until they are integrated into existing applications, in a global and systemic view centered on the human operator. To this extent, we will review the impact of the new usages on the three technology pillars of video surveillance: sensors, network, and infrastructure.
Finally, we will conclude by drawing a few perspectives on this new video surveillance, the one that shows through in the concept of co-production of security by private security firms and public authorities. We will show that the near future is rich in opportunities for companies that understand the technology challenge, not to downsize the security forces but to complement their action. This remains, however, solely a prospective view, largely abstracted from its legislative context, which accompanies any development in the public security field.
The conditions of the rise of the new revolution of video surveillance
The detonating cocktail of innovation in information technologies contains three mandatory ingredients: miniaturization of sensor technologies, increase in storage and processing capacity, and Internet connectivity. This cocktail explains the fuss around the Internet of Things (IoT), a new name for the well-known concept of machine-to-machine (M2M). In the camera field, the systematic use of one or two sensors per smartphone has brought new usages for shooting pictures and video, and for uploading and sharing them, which initially concerned only private users but ended up producing new professional security applications and services.
Smartphone, the first mobile connected camera
The scientific community has recently begun studying the impact of new video-sharing uses in the context of urban security. The paper “UbiOpticon”, from the Urban Informatics Research Lab of Queensland University in Australia and the Urban Computing and Cultures Research Group of Oulu University in Finland, describes an analysis of “participative” video surveillance.
It is a fact: historically, the first Internet objects have actually been smartphones. Each of them is equipped with one or two sensors and loaded with Internet connectivity, storage, and processing capacity. For a while now, many more cameras have been sold embedded in smartphones than in any other form. There are already more smartphones than human beings on Earth… Beyond smartphones, the miniaturization of video sensors turned them into an embeddable commodity and gave birth to a new generation of “video-augmented objects”, able to record and sometimes transmit images: drones, wearable accessories (glasses, bodycams), and recorders (dashcams, lightcams). The mobility of these new cameras, but also their position relative to the operator (police bodycams, smart glasses, drone cameras), opens new perspectives, because the user does not necessarily stream selfies but rather shares what he or she actually sees.
What I see is what you see.
New services based on mobile video capture and sharing
Concurrently with the massive deployment of smartphones, we have seen since 2006 the emergence of new video broadcasting services, in store-and-forward mode but also in real time. The usual video conferencing and video surveillance application segments have been completed by new uses geared toward video sharing.
The first applications able to share video from smartphones in real time over the Internet appeared a few years ago, with pioneering companies like Qik, created in 2006. Qik offered a free app, available on both the App Store and the Play Store, to capture and send a video stream from a smartphone to a central server, where the video was recorded and broadcast to a selected panel of friends, or to the public. Recorded videos could be viewed in private or public mode. As a comparison, Apple’s famous and excellent FaceTime service was only introduced by Steve Jobs four years later, in 2010, and only allowed two iPhone users to hold a simple video conference.
This comparison illustrates quite well the rupture that camera mobility and connectivity introduce compared with the historic usages of video, conferencing and surveillance, where cameras tend to be fixed and users few in number.
In the new “mobile video”, the main objective is sharing: real-time or delayed sharing with a more or less large number of people, in private or in public, opening new service opportunities and raising questions about the reach, use, and security of such applications.
Qik has now vanished, acquired and absorbed by Microsoft Skype in January 2011. The app was visionary, and other companies like Keek or Ustream (mentioned below) have since taken over the model. It is this particular usage of real-time video sharing, known as “lifecasting” in the US, that the major player of video over the Internet, YouTube, now offers, allowing users to operate a real “channel” on the Internet, with live sessions and recordings.
The Finnish experiment detailed in the scientific article UbiOpticon notably uses the Ustream service as one of the modules of its software architecture, to stream videos from smartphones and mix them with video coming from IP surveillance cameras (webcams). The video streams are conveyed to the central Ustream server, which records them and broadcasts them on monitoring screens purposely located in public spaces (a bus stop) for the needs of the experiment.
The latest avatar of this application kind is Periscope, a Twitter add-on built from a start-up acquired by Twitter in March 2015. The start-up’s tagline says it loud and clear: “See the world through someone else’s eyes.”
Loft Story, Secret Story, and Big Brother were also precursors of this new wave of video sharing, albeit broadcasting fixed cameras located in studios over the TV networks.
Today, authorities are beginning to use cameras mounted in cars and connected to city control centers. Police forces will soon be equipped (in France) with body cameras whose video will be recorded on the camera’s SD card before being offloaded to a central archival system that preserves the recordings. The legal framework must of course be clearly defined beforehand, as the risk of infringing privacy and individual liberties is high with such pervasive technologies. Nevertheless, the generalization of such video witnesses, already largely in use in North America, is a huge trend. It has created an increased need for powerful, virtually unlimited storage management systems, able to record and process all this mobile video.
Indeed, the number of cameras has skyrocketed, and nobody hopes to view them all live anymore. At best, we can analyze them in real time with sophisticated algorithms (video analytics). It then becomes mandatory to use a storage system capable not only of safely storing every camera’s feed, but also of efficiently streaming the recorded video to the analytic processes and human operators. To this extent, the role of the storage system becomes central, relegating the network to a commodity role. This is the advent of storage-centric video surveillance, after the rise of network-centric video surveillance. We will see in the following that this new paradigm is accompanied by the birth of a new activity of “mobile video surveillance”, embryonic today but almost certain to represent the largest part of security systems a few years from now. If today most cameras are stand-alone, we can foresee that once police and private security forces are fully equipped with body cameras and smart glasses, fixed cameras will become a minority among security video feeds.
As an early signal, one can notice that, during the recent tragic events that struck Paris, video and photo testimonials from the attacks were used systematically by the media. Social networks are thus taking a very important part in homeland security and are considered by crisis management authorities as a highly valuable open source of information.
Connectivity revolutionizes mobile video surveillance
In 2016, mobility does not systematically come with real-time transmission, because broadband wireless data networks are still expensive. Apart from a few vehicle-mounted cameras able to transmit their streams to urban control centers, the vast majority of bodycams are still merely portable DVRs. The same goes for drones: entry-level video drones only record, and the first-person-view (FPV) modules that allow remote piloting are reserved for high-end models like the one used in the assault against the terrorists in Saint-Denis. Hence, security video sources are more and more numerous, but few are available live; the majority is only available in delayed mode.
However, connectivity revolutionizes mobility. By allowing live image sharing, 4G networks open up new cooperation between mobile cameras and operations centers. We can anticipate that over the next five years, the 4G+ mobile data network will not only allow the operation of large numbers of mobile cameras, but will also represent a worthwhile substitute for the wired network when connecting new stand-alone cameras. Wireless broadband as the main connectivity solution for the whole range of video surveillance edge equipment, stand-alone, on-board, and mobile: there lies the revolution of this mobile interconnected video, and of the uses that will be derived from it. Let’s analyze this new context.
Geolocation, redundancy and interoperability redefine mobile and interconnected video surveillance
The new paradigm:
Cameras move with operators, being worn or remotely operated.
The supervisor manages operators to cover blind spots or to increase coverage of hot spots. Operators are selected by supervisors according to efficiency parameters such as proximity, moving speed and availability.
Operators in the field participate actively in security enforcement.
Operators bring cameras to identified events and threats to optimize situational awareness and decision making.
Operators cooperate with the supervisor and report on the situation witnessed by their video feed.
The supervisor is no longer a spectator of the event but the coordinator of video coverage: he manages the convergence of video operators on hot spots. The reactive surveillance based on fixed cameras is complemented by an active surveillance able to adapt to the threat and the situation. Video coverage can be adapted, optimized, even made predictive, opening new opportunities for successful cooperation with predictive policing applications, where statistical anticipation of situations and hot-spot locations is taken into account to optimize the placement of police forces.
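The selection logic sketched above, choosing an operator by proximity, moving speed and availability, could look like the following toy model. The field names and the straight-line travel-time estimate are assumptions made for the example.

```python
import math

def eta(operator, hotspot):
    """Straight-line travel time from operator to hotspot, in seconds."""
    dx = operator["x"] - hotspot["x"]
    dy = operator["y"] - hotspot["y"]
    return math.hypot(dx, dy) / operator["speed_mps"]

def dispatch(operators, hotspot):
    """Pick the available operator whose camera can reach the hotspot first."""
    candidates = [op for op in operators if op["available"]]
    return min(candidates, key=lambda op: eta(op, hotspot)) if candidates else None

operators = [
    {"id": "A", "x": 0,   "y": 0,   "speed_mps": 1.5, "available": True},
    {"id": "B", "x": 300, "y": 400, "speed_mps": 4.0, "available": True},   # on a bike
    {"id": "C", "x": 50,  "y": 50,  "speed_mps": 1.5, "available": False},  # busy
]
hotspot = {"x": 300, "y": 300}
best = dispatch(operators, hotspot)  # → operator "B": farther away but much faster
```

Even this naive version shows why proximity alone is not the right criterion: the nearest available operator on foot can lose to a farther one on a bike.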
IT infrastructures are the new pillars of video surveillance
The challenges of this revolution in mobile video surveillance are clear: we must overcome the limits that have hampered the effectiveness of traditional video surveillance as we know it today. Among them:
difficulties in managing the multiplication of video feeds
difficulties in analysing video and correlating information from different sources
the marginal role of the supervisor in video annotation and indexing
difficult coordination with operators in the field
Hence, to deliver a legitimate technological response, we must ensure the resilience of a broadband wireless network able to link together all the elements of the system, develop a server infrastructure able to manage the recording and analysis of all video feeds, and develop analytic algorithms for image processing and data correlation that will leverage this big data to provide a superior level of situational awareness.
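At its simplest, the data-correlation task mentioned above amounts to linking detections from different cameras that are close in time and space. A minimal sketch, where the thresholds and record layout are arbitrary assumptions:

```python
import math

def correlated(d1, d2, max_seconds=30.0, max_meters=100.0):
    """Two detections are correlated if they are close in time and space."""
    close_in_time = abs(d1["t"] - d2["t"]) <= max_seconds
    close_in_space = math.hypot(d1["x"] - d2["x"], d1["y"] - d2["y"]) <= max_meters
    return close_in_time and close_in_space

def correlate_feeds(detections):
    """Naive O(n^2) pass linking detections from different cameras."""
    links = []
    for i, a in enumerate(detections):
        for b in detections[i + 1:]:
            if a["camera"] != b["camera"] and correlated(a, b):
                links.append((a["camera"], b["camera"]))
    return links

detections = [
    {"camera": "fixed-12",  "t": 100.0, "x": 0,  "y": 0},
    {"camera": "bodycam-3", "t": 110.0, "x": 40, "y": 30},  # 50 m and 10 s apart
    {"camera": "drone-1",   "t": 400.0, "x": 0,  "y": 0},   # same place, too late
]
links = correlate_feeds(detections)  # → [("fixed-12", "bodycam-3")]
```

A production system would of course use spatial indexing and far richer features than position and time, but the principle, fusing fixed and mobile feeds into a single picture, is the same.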
100% BIG DATA 100% STORAGE CENTRIC 100% ANALYTICS 100% WIRELESS
Conclusion: toward the joint production of public-private security
Such a system is well placed to anticipate and manage crises with a reactivity and a scalability that far exceed those of the biggest current video surveillance systems.
Private security companies, if the law permits it, can play a substantial role by equipping their intervention forces with devices compliant with the authorities' requirements. In this way we will see the massive deployment of sensors comparable to smartphones, joined by drone cams, domestic cams, smart glasses, private dashcams, police car cameras and, last but not least, robot cameras, which will significantly lengthen the list of potential evidence witnesses over the next 15 years. The joint production of security between governments and private companies is thus potentially complemented by a possible uberization of homeland security, through a « crowd surveillance » effect. Moreover, on the Europol site one can read:
It is not enough for the police alone to fight crime. Reducing the risk and fear of crime is a task for the police and the community working together. To achieve our aim of making Europe safer, we need citizens who live here, work here and visit here to do their part in making life difficult for criminals. These pages contain basic information to help you contribute to the fight against crime by protecting yourself and your property. Follow these tips to prevent yourself from becoming a victim of crime…
The challenge for governments, given the global threat of terrorism, thus lies in the necessary cooperation of public and private forces and in the delivery of infrastructures and services able to anticipate the heavy industrial and economic trends described here. Nevertheless, these challenges in terms of capacity and security of information systems must remain framed in a « security by design » approach, which alone can guarantee the authenticity of video streams and, most importantly, protect individual liberties.