
Benefits of Artificial Intelligence

SquidX is a new operating system that utilizes AI to seamlessly integrate a user’s digital devices into one interface.


Unity

Every digital device has an OS consisting of a kernel, drivers, libraries, utilities, etc. Kernels range in function from simple “On/Off” commands to complex operations. SquidX combines basic and compound kernels into one hybrid design system.

Efficiency

SquidX engages its AI to build a file system for every user based upon the user’s devices and habits, constructing a personal support and development library tailored to the user’s routines. The more a user interacts with the interface, the more efficient it becomes.

Security

Not only does a singular interface free users from remembering multiple passwords, it protects their devices as well. Instead of giving a virus multiple entry points, there is only one door in, and that door is constantly cloaked and monitored by AI.

We are developing a new, dynamic, and intuitive Operating System (OS) that capitalizes on the learning and adaptive capabilities of Artificial Intelligence (AI). Today’s world is driven by smarter and faster digital devices: everything from refrigerators to planes has its own interface. Our OS is going to integrate all of these disparate vehicles, gadgets, electronics, and appliances under a singular interface that is unified, efficient, and secure.



Better Performance

AI constantly runs multidisciplinary functions in the background, ensuring that commands remain consistent across multiple platforms, tasks are automated, and users’ needs are anticipated. Old and new devices are continually analyzed to deliver the best implementation of user preferences, and AI tests, adapts, or upgrades as necessary, giving users an interface that anticipates their needs and improves device functionality.

More Secure

The best way for users to protect against a virus is to leave as few doors open as possible. Every interface a user interacts with is another door a virus can enter through. Our interface provides just a single entrance, one that is continually monitored for potential risks in both the real and digital realms, with users alerted to either.

Several mainstream digital devices, like cell phones and headsets, can collect and process limited environmental data. We think there should be a device that not only gathers a much broader array of information but looks good too. Our concept is AR glasses that gather locational LiDAR, photometric, thermal, and other types of input. Technology needs to be functional, fashionable, and comfortable. Who wants to walk around in a hot, cumbersome helmet that obscures the real world, limits mobility, or causes nausea? No one.

Currently, there is no comprehensive way to process and send all this data to multiple output devices. That is, until the arrival of SquidX. Our groundbreaking code utilizes AI both to translate environmental data and to power our Universal Interface. Once data has been processed, it can be output to almost any AR or VR device.

We are developing several revolutionary tools built with our AI technology, including a 3D Camera and the World Builder. Our software turns input devices into 3D Cameras capable of collecting and translating locational data in real time. The World Builder creates virtual three-dimensional computer simulations of real-world experiences and environments, providing a way to see and experience remote locations, demonstrations, or interactions in real time or on demand. It is perfect for manufacturing, imaging, entertainment, or training purposes. The possibilities are truly endless.




Basic Process Overview and Development

The following diagrams illustrate our AI-assisted collection and translation of data from an input source to an output device. They happen to be for our 3D Camera, but they illustrate our general methodology of data integration. We use a modular structure composed of individual sequential blocks linked together through AI. This allows specialized teams to develop each block, focusing on the part instead of on the whole, and giving designers license to concentrate solely on what they do best. AI then generates the top-level processes that enable each block to extract and transform data from the previous block, assuring that no one individual can break or breach the system. Finally, AI catalogues all blocks, processes, and data into a proprietary library to be accessed in the future.
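To make the block concept concrete, here is a minimal sketch of what such a modular pipeline could look like in Python. The class and method names are illustrative assumptions, not SquidX’s actual API, and the placeholder stages simply pass data along.

```python
from abc import ABC, abstractmethod
from typing import Any, List

class Block(ABC):
    """One self-contained pipeline stage, developed by a specialized team."""

    @abstractmethod
    def transform(self, data: Any) -> Any:
        """Extract and transform data handed over by the previous block."""

class Pipeline:
    """Links sequential blocks; a top-level process moves data between them."""

    def __init__(self, blocks: List[Block]):
        self.blocks = blocks

    def run(self, data: Any) -> Any:
        for block in self.blocks:
            data = block.transform(data)  # each stage sees only its own input
        return data

# Illustrative stages for the 3D Camera flow described in this section.
class LidarToPointCloud(Block):
    def transform(self, data):
        return {"points": data["lidar"]}  # placeholder logic

class PointCloudToMesh(Block):
    def transform(self, data):
        return {"mesh": data["points"]}  # placeholder logic

result = Pipeline([LidarToPointCloud(), PointCloudToMesh()]).run({"lidar": [0.0, 1.0]})
print(result)
```

Because each stage only consumes the previous stage’s output, new tools can be assembled by reordering or swapping blocks, which is the Lego-like composition described below.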

The power of our pipeline is multifold. First, all assets are interconnected, so as the data library grows and expands, our data-driven tools gain more resources and functionality. Tool development can be visualized and monitored at the micro or macro level. New tools can be conceived by adding or subtracting existing assets like Legos®. All in all, it is a more efficient and imaginative way to approach the research, development, and implementation of new concepts.





1. Data Input and Pre-process

SquidX combines locational camera and LiDAR data to build realistic 3D replicas of real-world environments. The raw data necessary to do so is accessible through most cell phones, as today’s phones come equipped with both camera and LiDAR (Light Detection and Ranging) technology. LiDAR, a remote sensing method, uses laser light to measure distance, producing finely detailed object replication. Right now, it’s used mainly to enhance photography, but SquidX extracts the LiDAR data and processes it into point cloud data, which yields world-space positioning, scaling, and location information. Camera and point cloud data are combined to generate a virtual facsimile of the entire area surrounding the phone in 360° world space, not just the sector the camera is aimed at. In effect, this turns your phone into a 3D Camera. The computer-generated replica is broadcast to a remote AR/VR input device, where an end user can “walk around” and explore the real-world location virtually.
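As an illustration of this back-projection step, here is a minimal sketch using the open-source Open3D library; the source does not specify SquidX’s internal tooling, and the synthetic arrays stand in for a real camera frame and its LiDAR-derived depth map.

```python
import numpy as np
import open3d as o3d  # open-source 3D library, standing in for internal tools

# Synthetic stand-ins for one RGB camera frame and its LiDAR-derived depth map.
color_np = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
depth_np = np.full((480, 640), 1500, dtype=np.uint16)  # depth in millimeters

rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    o3d.geometry.Image(color_np),
    o3d.geometry.Image(depth_np),
    convert_rgb_to_intensity=False,
)

# Camera intrinsics (focal length, principal point) give every pixel a 3D ray.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault
)

# Back-project the depth pixels into a colored point cloud in camera space.
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
print(pcd)  # reports the number of reconstructed points
```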


2. Data Input Processing

After being collected, the raw data needs to be processed. The first step is to clean and analyze the LiDAR data, which has been converted into point cloud data. SquidX has developed a tool combining base algorithms with specific requirements to construct depth maps that allow for clustering. Both point data and camera data help us begin the process of organizing and grouping: AI analyzes raw camera data, such as focal length and f-stop, then combines it with the point data, which is mainly world-space information from the cell phone. We group common areas together to create similar depth-map values, identifying surfaces that share the same topology. We use multi-layered files composed mainly of a base of depth-map positioning in world space, layered with environmental data generated by AI. AI then normalizes all processed data into groups of surfaces.
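SquidX’s grouping tool itself is proprietary, so purely as an illustration, here is a sketch of the surface-grouping idea using scikit-learn’s DBSCAN on synthetic world-space points; the data, parameters, and library choice are all assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic world-space points: a wall and a floor at different depths,
# standing in for depth-map values derived from the LiDAR point data.
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 2, 500), rng.uniform(0, 2, 500), np.full(500, 3.0)])
floor = np.column_stack([rng.uniform(0, 2, 500), np.full(500, 0.0), rng.uniform(0, 2, 500)])
points = np.vstack([wall, floor])

# Density-based clustering groups points that share the same local topology;
# each resulting label is a candidate group surface.
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)

n_groups = len(set(labels)) - (1 if -1 in labels else 0)  # label -1 marks noise
print(n_groups, "surface groups found")
```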


3. Multi-Structure Setup

The next step is to organize all incoming data and prep it for combination into a multi-node system. SquidX’s AI evaluates camera and point data to begin the process of group-surface auto-generation. It utilizes ML to compare incoming data with learned surfaces contained within the procedural library, adjusting relative to the point array coming from the export device. To further pre-process points, AI uses color visualization to remove noise, outliers, and any unwanted artifacts. Color visualization is also used to group and mask various layers during file creation. For optimization, and to keep processing to a minimum, points are not converted into mesh objects until a multi-layer file node is created. Then basic point-cloud-to-mesh conversion tools, such as the marching cubes algorithm, Poisson surface reconstruction, and alpha shapes, are used to convert points into mesh objects, preparing the node for the next process.
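The conversion tools named above are standard, published algorithms. A minimal sketch of two of them, using the open-source Open3D implementations on a synthetic point cloud (SquidX’s own wrappers are not public), might look like this.

```python
import open3d as o3d

# A synthetic point cloud sampled from a sphere stands in for a cleaned node.
sphere = o3d.geometry.TriangleMesh.create_sphere(radius=1.0)
sphere.compute_vertex_normals()  # sampled points inherit oriented normals
pcd = sphere.sample_points_uniformly(number_of_points=5000)

# Poisson surface reconstruction fits a smooth, watertight surface to the points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# Alpha shapes offer an alternative that can preserve sharper boundaries.
alpha_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha=0.5)

print(len(mesh.triangles), "Poisson triangles;", len(alpha_mesh.triangles), "alpha-shape triangles")
```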


4. Location-Based Data

Multi-layered nodes are imported into an integrated development environment (IDE) or game engine. Here, SquidX rebuilds the data collected by an export device, from here on referred to as a 3D Camera, into its digital twin: a twin that recreates not only surface geometry but also locational lighting and scale. Scale and distortion are important components often lacking from most VR experiences, which is why VR headaches are so common. Our AI gathers the assets needed to configure multiple nodes and utilities, bridges gaps between missing attributes or functionality, and builds photorealistic computer-generated replicas, hence the name “World Builder”. The tool is integrated without the need for any manual adjustments, and end users can customize experiences to discover more of a remote location. The World Builder is accessible to several input devices via SquidX’s universal interface.
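To illustrate the kind of scale correction this step depends on, here is a hedged sketch using the open-source trimesh library; the millimeter-to-meter conversion, the world-space offset, and the glTF export are illustrative assumptions, not SquidX’s documented pipeline.

```python
import trimesh

# A primitive stands in for a reconstructed surface node from the prior step.
mesh = trimesh.creation.icosphere(radius=1000.0)  # radius in millimeters

# LiDAR distances arrive in millimeters, while most engines expect meters,
# so a uniform scale correction is applied once per node before import.
mesh.apply_scale(0.001)

# Place the node at its captured world-space location (illustrative offset).
mesh.apply_translation([12.5, 0.0, -3.2])

mesh.export("node_surface_world.glb")  # glTF loads directly into common engines
```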


5. Input Device

SquidX’s universal interface allows users to choose whatever AR or VR input device they have or prefer, including their cell phones. But to get the best results from our groundbreaking 3D Camera and World Builder technology, and to truly unlock the full power of our AI-assisted worlds, we suggest users employ the input devices we are optimizing our tools for: the Oculus Quest 3, Apple Vision Pro, and Magic Leap. We feel these options offer the best potential for delivering our AI-driven simulated environments. Our software provides true photorealistic virtual settings for work, education, training, healthcare, communications, gaming, and more, and we want you to experience it to its fullest. Our AI-assisted interface also helps you get the most from your devices by adapting to your usage and anticipating your needs.
