https://github.com/combra-lab/combra_loihi

combra_loihi

combra_loihi is a neuromorphic computing library for Computational Astrocyence developed specifically for Intel's Loihi neuromorphic processor. The library is developed by the [Computational Brain Lab](http://combra.cs.rutgers.edu/) (ComBra) at Rutgers University.

Version 0.1 (11/2018)

Prerequisites:
- python 3.5.2
- NxSDK 0.7

For more information, please see the [combra_loihi WiKi](https://github.com/combra-lab/combra_loihi/wiki).

Related Publication

Guangzhi Tang, Ioannis E. Polykretis, Vladimir A. Ivanov, Arpit Shah, Konstantinos P. Michmizos. "Introducing astrocytes on a neuromorphic processor: Synchronization, local plasticity and edge of chaos." Neuro-inspired Computational Elements Workshop (NICE 2019), Albany, NY, USA. [pdf](https://arxiv.org/pdf/1907.01620.pdf)

====

https://github.com/combra-lab/spiking-ddpg-mapless-navigation

Spiking Neural Network for Mapless Navigation

This package is the PyTorch implementation of the Spiking Deep Deterministic Policy Gradient (SDDPG) framework. The hybrid framework trains a spiking neural network (SNN) for energy-efficient mapless navigation on Intel's Loihi neuromorphic processor. The following figure shows an overview of the proposed method: [overview of method](https://github.com/combra-lab/spiking-ddpg-mapless-navigation/blob/master/ov...)

The paper was accepted at IROS 2020. The arXiv preprint is available [here](https://arxiv.org/abs/2003.01157).

New: We have created a new GitHub repo to demonstrate online runtime interaction with Loihi. If you are interested in using Loihi for real-time robot control, please [check it out](https://github.com/michaelgzt/loihi-control-loop-demo).

Citation

Guangzhi Tang, Neelesh Kumar, and Konstantinos P. Michmizos. "Reinforcement co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware." 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020.

@inproceedings{tang2020reinforcement,
  title={Reinforcement co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware},
  author={Tang, Guangzhi and Kumar, Neelesh and Michmizos, Konstantinos P},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={1--8},
  year={2020},
  organization={IEEE}
}

Software Installation

1. Basic Requirements
- Ubuntu 16.04
- Python 3.5.2
- ROS Kinetic (with Gazebo 7.0)
- PyTorch 1.2 (with CUDA 10.0 and tensorboard 2.1)
- NxSDK 0.9

ROS Kinetic is not compatible with Python 3 by default. If you have issues using Python 3 with ROS, please follow this [link](https://medium.com/@beta_b0t/how-to-setup-ros-with-python-3-44a69ca36674) to resolve them. We use the default Python 2 environment to execute roslaunch and rosrun.

A CUDA-enabled GPU is not required but is preferred for training within the SDDPG framework. The results in the paper are generated from models trained on both an Nvidia Tesla K40c and an Nvidia GeForce RTX 2080 Ti.

Intel's neuromorphic library NxSDK is only required for SNN deployment on Loihi. If you are interested in deploying the trained SNN on Loihi, please contact the [Intel Neuromorphic Lab](https://www.intel.com/content/www/us/en/research/neuromorphic-community.html).

We have provided requirements.txt for the Python environment without NxSDK. In addition, we recommend setting up the environment using [virtualenv](https://pypi.org/project/virtualenv/).
2. Simulation Setup

The simulation environment simulates a Turtlebot2 robot with a 360-degree LiDAR in the Gazebo simulator.

The Turtlebot2 dependency can be installed using:

sudo apt-get install ros-kinetic-turtlebot-*

We use the Hokuyo LiDAR model in the simulation and set its parameters to match the RPLIDAR S1. The LiDAR dependency can be installed using:

sudo apt-get install ros-kinetic-urg-node

Download the project and compile the catkin workspace:

cd <Dir>/<Project Name>/ros/catkin_ws
catkin_make

Add the following lines to your ~/.bashrc so that the ROS environment is set up properly:

source <Dir>/<Project Name>/ros/catkin_ws/devel/setup.bash
export TURTLEBOT_3D_SENSOR="hokuyo"

Run source ~/.bashrc afterward and test the environment setup by running (use the Python 2 environment):

roslaunch turtlebot_lidar turtlebot_world.launch

You should be able to see the Turtlebot2 with a LiDAR on top.

3. Real-world Setup

We install the [RPLIDAR S1](https://www.slamtec.com/en/Lidar/S1) at the center of the top level of the Turtlebot2. To use the LiDAR with ROS, download and install the rplidar_ros library from [here](https://github.com/robopeak/rplidar_ros) on the laptop controlling the Turtlebot2. After installing the library, add the LiDAR to the tf tree by adding a tf publisher node to minimal.launch in the turtlebot_bringup package:

<node name="base2laser" pkg="tf" type="static_transform_publisher" args="0 0 0 0 0 1 0 /base_link /laser 50" />
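The static_transform_publisher arguments above are x y z, a quaternion (qx qy qz qw), the parent and child frames, and a publish period in milliseconds. To sanity-check that the /base_link to /laser transform is actually being broadcast, a minimal sketch like the following can be used. This script is not part of the repository; it only uses the standard ROS Kinetic tf Python API (Python 2 environment):

    # Sketch (not from the repository): verify the static /base_link -> /laser
    # transform is on the tf tree after editing minimal.launch.
    import rospy
    import tf

    rospy.init_node('check_laser_tf')
    listener = tf.TransformListener()
    rospy.sleep(1.0)  # give the listener time to fill its buffer

    try:
        trans, rot = listener.lookupTransform('/base_link', '/laser', rospy.Time(0))
        print('translation:', trans)            # expected: [0, 0, 0]
        print('rotation (quaternion):', rot)    # expected: [0, 0, 1, 0]
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException) as e:
        print('transform not available yet:', e)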
Test the setup by running (use the Python 2 environment):

roslaunch turtlebot_bringup minimal.launch

and

roslaunch rplidar_ros rplidar_s1.launch

in separate terminals on the laptop controlling the Turtlebot2.

Example Usage

1. Training SDDPG

To train the SDDPG, first launch the training world, which includes 4 different environments (use the Python 2 environment and an absolute path for <Dir>):

roslaunch turtlebot_lidar turtlebot_world.launch world_file:=<Dir>/<Project Name>/ros/worlds/training_worlds.world

Then, run the laserscan_simple ros node in a separate terminal to sample laser scan data every 10 degrees (use the Python 2 environment):

rosrun simple_laserscan laserscan_simple

Now we have all the ROS prerequisites for training.
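For intuition, downsampling a scan to one reading every 10 degrees can be sketched as follows. This is only an illustration of the idea; the actual node is the simple_laserscan package built in ros/catkin_ws, and the topic name here is an assumption:

    # Illustration only: keep one LaserScan range every ~10 degrees.
    # The real implementation is the simple_laserscan package in ros/catkin_ws.
    import math
    import rospy
    from sensor_msgs.msg import LaserScan

    def callback(scan):
        # number of beams that span roughly 10 degrees
        step = max(1, int(round(math.radians(10.0) / scan.angle_increment)))
        sampled = scan.ranges[::step]
        rospy.loginfo('sampled %d of %d ranges', len(sampled), len(scan.ranges))

    rospy.init_node('laserscan_downsample_demo')
    rospy.Subscriber('/scan', LaserScan, callback)
    rospy.spin()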
Execute the following commands to start the training in a new terminal (use the Python 3 environment):

source <Dir to Python 3 Virtual Env>/bin/activate
cd <Dir>/<Project Name>/training/train_spiking_ddpg
python train_sddpg.py --cuda 1 --step 5

This will automatically train 1000 episodes in the training environments and save the trained parameters every 10k steps. Intermediate training results are also saved through tensorboard. If you want to perform the training on CPU, set --cuda to 0. You can also train with a different number of SNN inference timesteps by setting --step to the desired number.
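To make the role of --step concrete, below is a minimal PyTorch sketch of a rate-coded spiking actor network that runs leaky integrate-and-fire layers for T timesteps and averages the output into two continuous actions. It is not the repository's actual architecture; the observation size, layer sizes, and input encoding are assumptions, and training such a network additionally requires a surrogate gradient through the spike function, which this sketch omits:

    # Minimal sketch of a rate-coded spiking actor (NOT the repository's exact model).
    import torch
    import torch.nn as nn

    STATE_DIM = 24    # hypothetical observation size (sampled scans + goal + velocity)
    ACTION_DIM = 2    # linear and angular velocity commands

    class SpikingActor(nn.Module):
        """Spiking actor run for a fixed number of inference timesteps (--step)."""
        def __init__(self, hidden=256, threshold=0.5, decay=0.75, steps=5):
            super(SpikingActor, self).__init__()
            self.fc1 = nn.Linear(STATE_DIM, hidden)
            self.fc2 = nn.Linear(hidden, hidden)
            self.out = nn.Linear(hidden, ACTION_DIM)
            self.threshold = threshold
            self.decay = decay
            self.steps = steps

        def lif(self, v, current):
            # leaky integrate-and-fire update with a hard reset after each spike
            v = self.decay * v + current
            spike = (v >= self.threshold).float()
            return v * (1.0 - spike), spike

        def forward(self, state):
            b = state.shape[0]
            v1 = state.new_zeros(b, self.fc1.out_features)
            v2 = state.new_zeros(b, self.fc2.out_features)
            out_sum = state.new_zeros(b, ACTION_DIM)
            for _ in range(self.steps):                 # T inference timesteps
                v1, s1 = self.lif(v1, self.fc1(state))  # constant-current input coding
                v2, s2 = self.lif(v2, self.fc2(s1))
                out_sum = out_sum + self.out(s2)
            return torch.sigmoid(out_sum / self.steps)  # average and squash to [0, 1]

    actor = SpikingActor(steps=5)
    action = actor(torch.rand(1, STATE_DIM))            # tensor of shape (1, 2)

Averaging over more timesteps (a larger --step) gives a smoother rate estimate of the action at the cost of more SNN computation per decision.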
In addition, we also provide a state-of-the-art DDPG implementation that trains a non-spiking deep actor network for mapless navigation. If you want to train the DDPG network, run the following commands in a new terminal (use the Python 3 environment):

source <Dir to Python 3 Virtual Env>/bin/activate
cd <Dir>/<Project Name>/training/train_ddpg
python train_ddpg.py --cuda 1
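For readers unfamiliar with DDPG-style training, the sketch below shows the co-learning idea shared by both variants: a conventional deep critic scores (state, action) pairs, and the actor is updated by backpropagating the critic's value through its own output. Everything here is illustrative (sizes, learning rates, and the dummy targets); the stand-in actor is a plain feed-forward network, whereas the actual SDDPG actor is the spiking network trained through the same critic with a surrogate gradient:

    # Illustrative co-learning step; sizes, rates, and targets are made up.
    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM, HIDDEN = 24, 2, 256

    # Stand-in deep actor; in SDDPG this slot is filled by the spiking actor network.
    actor = nn.Sequential(
        nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
        nn.Linear(HIDDEN, ACTION_DIM), nn.Sigmoid())

    class Critic(nn.Module):
        """Deep critic: scores (state, action) pairs with a single Q-value."""
        def __init__(self):
            super(Critic, self).__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
                nn.Linear(HIDDEN, 1))

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=1))

    critic = Critic()
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    # One update on a dummy batch; real targets come from reward plus the
    # discounted target-network value sampled from a replay buffer.
    state = torch.rand(32, STATE_DIM)
    action = torch.rand(32, ACTION_DIM)
    target_q = torch.rand(32, 1)

    # critic step: regress Q(s, a) toward the bootstrapped target
    critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # actor step: maximize Q(s, actor(s)) by minimizing its negative
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()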
2. Evaluate in simulated environment

To evaluate the trained Spiking Actor Network (SAN) in Gazebo, first launch the evaluation world (use the Python 2 environment and an absolute path for <Dir>):

roslaunch turtlebot_lidar turtlebot_world.launch world_file:=<Dir>/<Project Name>/ros/worlds/evaluation_world.world

Then, run the laserscan_simple ros node in a separate terminal to sample laser scan data every 10 degrees (use the Python 2 environment):

rosrun simple_laserscan laserscan_simple

Now we have all the ROS prerequisites for evaluation. Run the following commands to start the evaluation in a new terminal (use the Python 3 environment):

source <Dir to Python 3 Virtual Env>/bin/activate
cd <Dir>/<Project Name>/evaluation/eval_random_simulation
python run_sddpg_eval.py --save 0 --cuda 1 --step 5

This will automatically navigate the robot through 200 randomly generated start and goal positions. The full evaluation takes more than 2 hours. If you want to perform the evaluation on CPU, set --cuda to 0. You can also evaluate with a different number of SNN inference timesteps by setting --step to the desired number.
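During evaluation, the network's two outputs are converted into robot velocity commands. As a rough illustration (not the repository's code; the topic name and scaling constants are assumptions), publishing such a command to the simulated Turtlebot2 could look like this:

    # Illustration only: map two network outputs in [0, 1] to a Turtlebot2 velocity command.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('sddpg_action_demo')
    # Topic name is an assumption; the evaluation scripts handle this internally.
    pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=1)

    def publish_action(a_linear, a_angular, max_lin=0.5, max_ang=1.0):
        cmd = Twist()
        cmd.linear.x = a_linear * max_lin                   # forward speed in m/s
        cmd.angular.z = (a_angular - 0.5) * 2.0 * max_ang   # recentre to [-1, 1], rad/s
        pub.publish(cmd)

    publish_action(0.6, 0.4)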
To deploy the trained SAN on Loihi and evaluate it in Gazebo, you need the Loihi hardware. If you have the Kapoho Bay USB chipset, run the following commands to start the evaluation (use the Python 3 environment):

source <Dir to Python 3 Virtual Env>/bin/activate
cd <Dir>/<Project Name>/evaluation/eval_random_simulation_loihi
KAPOHOBAY=1 python run_sddpg_loihi_eval.py --save 0 --step 5

You can also evaluate with a different number of SNN inference timesteps by setting --step to the desired number. In addition, you also need to change the epoch value in the <Project Name>/evaluation/loihi_network/snip/encoder.c file so that it corresponds to the inference timesteps.
For both evaluations, you can set --save to 1 to save the robot routes and times. These running histories are then used to generate the results shown in the paper. Run the following commands to evaluate the history yourself (use the Python 3 environment):

source <Dir to Python 3 Virtual Env>/bin/activate
cd <Dir>/<Project Name>/evaluation/result_analyze
python generate_results.py

You should be able to get the following results when evaluating the SAN on GPU with T=5:

sddpg_bw_5 random simulation results:
Success: 198
Collision: 2
Overtime: 0
Average Path Distance of Success Routes: 18.539 m
Average Path Time of Success Routes: 42.519 s

[overview of method](https://github.com/combra-lab/spiking-ddpg-mapless-navigation/blob/master/ev...) with red dots as goal positions, blue dots as start positions, and red crosses as collision positions.
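As a rough picture of the kind of summary generate_results.py computes (the saved-history format used below is an assumption; only the reported metrics come from the output above):

    # Sketch of the summary metrics; the on-disk history format is an assumption.
    def summarize(routes):
        """routes: list of dicts with 'outcome' in {'success','collision','overtime'},
        'distance' in meters, and 'time' in seconds."""
        counts = {'success': 0, 'collision': 0, 'overtime': 0}
        dists, times = [], []
        for r in routes:
            counts[r['outcome']] += 1
            if r['outcome'] == 'success':
                dists.append(r['distance'])
                times.append(r['time'])
        print('Success:', counts['success'])
        print('Collision:', counts['collision'])
        print('Overtime:', counts['overtime'])
        if dists:
            print('Average Path Distance of Success Routes: %.3f m' % (sum(dists) / len(dists)))
            print('Average Path Time of Success Routes: %.3f s' % (sum(times) / len(times)))

    summarize([{'outcome': 'success', 'distance': 18.2, 'time': 41.0},
               {'outcome': 'collision', 'distance': 3.1, 'time': 7.5}])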
3. Evaluate in real-world environment

Our real-world evaluation relies on amcl to localize the robot and generate relative goal positions. Therefore, to evaluate the trained SNN in a real-world environment, you first have to generate a map of the environment using GMapping (use the Python 2 environment):

roslaunch turtlebot_lidar gmapping_lplidar_demo.launch

Then, you can use the saved map to localize the robot's pose (use the Python 2 environment):

roslaunch turtlebot_lidar amcl_lplidar_demo.launch map_file:=<Dir to map>

You can view the robot navigation using rviz by running, in a separate terminal (use the Python 2 environment):

roslaunch turtlebot_rviz_launchers view_navigation.launch

After verifying that the robot can correctly localize itself in the environment, you can start to evaluate the trained SNN. Here, we only support evaluation on Loihi. To deploy the trained SNN on Loihi, you need the Loihi hardware. If you have the Kapoho Bay USB chipset, run the following commands to start the evaluation (use the Python 3 environment):

source <Dir to Python 3 Virtual Env>/bin/activate
cd <Dir>/<Project Name>/evaluation/eval_real_world
KAPOHOBAY=1 python run_sddpg_loihi_eval_rw.py

For your own environment, remember to change the GOAL_LIST in the evaluation script to the appropriate goal positions for the environment, for example as sketched below.
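A hypothetical GOAL_LIST could look like the following: a sequence of goal positions, in meters, expressed in the map frame that amcl localizes against. The exact format expected by run_sddpg_loihi_eval_rw.py may differ, and the coordinates below are made up:

    # Hypothetical example only; check the evaluation script for the exact format.
    GOAL_LIST = [
        (1.5, 0.0),   # (x, y) goal position in meters, map frame
        (3.0, 2.0),
        (0.5, 3.5),
    ]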
Acknowledgment

This work is supported by Intel's Neuromorphic Research Community Grant Award.
Unless and until you start tying these types of technocratic specialist posts of yours to crypto-anarchism, I suggest you post them on a more appropriate list. "CODERPUNKS", for example. Or maybe "Mens Rights" or "Mens Liberation" - you being a Coder Man.

On Friday, 1 October 2021, 09:43:58 am AEST, coderman <coderman@protonmail.com> wrote:

[...]
I think what PR means is, where the heck do hackers get neuromorphic processors to use this code??? Or is there some other use it has?
In other news, you can teach an AI to do cryptography nowadays, and that means the landscape is different in a way I don't understand in the slightest. Also wtg for sharing open source advanced technology. Obviously most people are either still enslaved by govcorp or carefully hiding from govcorp that we are developing communities of people all of whom know how to make a robot army from scratch.
hello Karl, a quick reply sans signature (embedded HTML) for convenience ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Friday, October 1, 2021 9:38 AM, Karl <gmkarl@gmail.com> wrote:
I think what PR means is, where the heck do hackers get neuromorphic processors to use this code??? Or is there some other use it has?
Intel is releasing the new hardware, but it's not clear to what groups, or at what cost. A relevant article:

https://www.anandtech.com/show/16960/intel-loihi-2-intel-4nm-4

Intel Rolls Out New Loihi 2 Neuromorphic Chip: Built on Early Intel 4 Process
by [Dr. Ian Cutress](https://www.anandtech.com/Author/140) on September 30, 2021 11:00 AM EST

We've been keeping light tabs on Intel's neuromorphic efforts ever since it launched its first dedicated 14nm silicon for neuromorphic computing, called Loihi, [back in early 2018](https://www.anandtech.com/show/12261/intel-at-ces-2018-keynote-live-blog). In an interview with [Intel Lab's Director Dr. Richard Uhlig](https://www.anandtech.com/show/16515/the-intel-moonshot-division-an-intervie...) back in March 2021, I asked about the development of the hardware, and when we might see a second generation. Today is that day, and the group is announcing Loihi 2, a substantial upgrade over the first generation that addresses a lot of the low-hanging fruit from the first design. What is perhaps just as interesting is the process node used: Intel is communicating that Loihi 2 is being built, in silicon today, using a pre-production version of Intel's first EUV process node, Intel 4.

Neuromorphic Computing for Intel

The idea behind an architecture modeled, at its core, like a brain is that millions of neurons and synapses can deliver unique power/performance benefits on the specific kinds of tasks that brains are built to do. It's a long-term potential commercial product for Intel; however, the task for the team has been to develop both the technology and the software to discover and accelerate tasks that are suited to neuron-type computing.

The Neuromorphic Lab at Intel was actually born out of an acquisition of Fulcrum Microsystems in 2011. At the time, the Fulcrum team was an asynchronous computing group working on network switches. That technology was moved up to the networking group inside Intel, and the research division turned its attention to other uses of asynchronous compute, and landed on neuromorphic computing. At that time, research into this sort of neuromorphic computing architecture for actual workloads was fairly nascent – while the field had been around [since the late 1980s](http://www.carvermead.caltech.edu/research.html), dedicated research-built hardware didn't really exist until the early 2010s. The [Human Brain Project](https://www.humanbrainproject.eu/en/), a 10-year research project funded by the European Union to look into this field, was only established in 2013, and out of it came the [SpiNNaker](http://apt.cs.manchester.ac.uk/projects/SpiNNaker/) system in 2019, with a million cores and a billion neurons for 100 kW of active power.

By comparison, Intel's first-generation Loihi supports 131,000 neurons per 60 mm2 chip, and 768 chips can be put together in a single Pohoiki Springs system with 100 million neurons for only 300 watts. In Intel's own marketing, they've described this as the equivalent of a hamster. The new Loihi 2 chip, at a high level, uses 31 mm2 per chip for a million neurons, effectively increasing density 15x; however, the development goes beyond raw numbers.

Loihi 2

The Loihi 2 chip at a high level might look similar: 128 neuromorphic cores, but now each core has 8x more neurons and synapses.
Each of those 128 cores has 192 KB of flexible memory, compared to the first generation, where the memory per core was fixed, and each neuron can be allocated up to 4096 states depending on the model, whereas the previous limit was only 24. The neuron model is also now fully programmable, akin to an FPGA, allowing for greater flexibility.

Traditionally, neurons and spiking networks deliver data as binary events, which is what Loihi v1 did. With Loihi 2, those events can be graded with a 32-bit payload, offering deeper flexibility for on-chip compute. Those events can now be monitored in real time with new development/debug features on chip, rather than pause/read/play. In combination, this also allows for better control when dynamically changing compute workloads, such as fan-out compression, weight scaling, convolutions, and broadcasts.

Perhaps one of the biggest improvements is in connectivity. The first generation used a custom asynchronous protocol to create a large 2D network of neurons, while Loihi 2 can be configured to use a variety of protocols based on need, and also in a 3D network. We were told that Loihi 2 isn't just a single chip: it will be a family of chips with the same neuron architecture but a variety of different connectivity options based on specific use cases. This can be used in conjunction with onboard message compression accelerators to get an effective 10x increase in chip-to-chip bandwidth.

This also extends to external Loihi connectivity to more conventional computing, which was previously FPGA-mediated – Loihi 2 now supports 10G Ethernet, GPIO, and SPI. This should allow for easier integration without the need for custom systems, such as creating disaggregated Loihi 2 compute clusters.

Built on Intel 4

We were surprised to hear that Loihi 2 is built on a pre-production version of the Intel 4 process. We are still some time away from Loihi 2 being a portion of Intel's revenue, and the Neuromorphic team knows as much, but it turns out that the chip is perhaps an ideal candidate to help bring up a new process. At 31 mm2, the size means that even if the yield needs to improve, a single wafer can offer more working chips than testing with a bigger die size would. As the team does post-silicon testing for voltage/frequency/functionality, they can cycle back more quickly to Intel's Technology Development team. We confirmed that there is actual silicon in the lab, and in fact the hardware will be available today through Intel's DevCloud, direct to metal, without any emulation.

Normally with new process nodes, you need a customer with a small silicon die size to help iterate through the potential roadblocks in bringing a process up to a full-scale ramp and production. Intel's foundry competitors normally do this with customers that have smartphone-sized chips, and the benefit for the customer is usually being first to hardware, or perhaps some sort of initial discount (although, perhaps not in today's climate). Intel has previously struggled on that front, as it only has its own silicon to use as a test vehicle.
The Neuromorphic team said that it was actually a good fit, given that neuromorphic hardware requires the high density and low static power afforded by leading-edge process nodes. The 128-core design also means that it has a consistent repeating unit, allowing the process team to look at regularity and consistency in production. Also, given that Loihi still remains a research project for now, there's no serious expectation to drive the product to market in a given window, which a big customer might need.

Does this mean Intel 4 is ready for production? Not quite, but it does indicate that progress is being made. A number of Loihi 2's listed benchmarks did have the caveat of 'expected given simulated hardware results', although a few others were done on real silicon, and the company says it has real silicon to deploy in the cloud today. Intel 4 is Intel's first process node to use Extreme Ultraviolet (EUV) lithography, and Intel will be the last major semiconductor manufacturer to initiate an EUV process for productization. But we're still a way off – back at [Intel's Accelerated](https://www.anandtech.com/show/16823/intel-accelerated-offensive-process-roa...) event, EUV and Intel 4 weren't really expected to ramp production until the second half of 2022.

https://images.anandtech.com/doci/16960/AnandTechRoadmaps3.png

To wrap up, from Intel's announcement we are able to look at transistor density. At 2.3 billion transistors in 31 mm2, that would put the density at 71.2 million per mm2, which is only about a third of what we were expecting. Estimates based on Intel's previous announcements would put Intel 4 at around 200 MTr/mm2. So why is Loihi 2 so low compared to that number? First is perhaps that it is a neuromorphic chip, and not a traditional logic design. The chip has ~25 MB of SRAM in total (128 cores x 192 KB is roughly 24 MB) along with all the logic, which for a 31 mm2 chip might be a good chunk of the die area. Also, Intel's main idea with the neuromorphic chips is functionality first, performance second, and power third. Getting it working right is more important than getting it working fast, so there isn't always a raw need for the highest density. Then there's the fact that it's still a development chip, and it allows Intel to refine its EUV process and test for precision lithography without having to worry as much about defects caused by dense transistor libraries. More to come, I'm sure. To add a final point, our briefing did speculate that the neuromorphic IP could potentially be made available through Intel's Foundry Services IP offerings in the future.

New Lava Software Framework

Regardless of processing capability, one of the main building blocks for a neuromorphic system is the type of compute, and perhaps how difficult it is to write software to take advantage of such an architecture. In a discussion with Intel's Mike Davies, Director of Intel's Neuromorphic Lab, the best description we arrived at is that modern computing is akin to a polling architecture – every cycle it takes data and processes it. By contrast, neuromorphic computing is an interrupt-based architecture – it acts when data is ready. Neuromorphic computing is more time-domain dependent than modern computing, and so both the concept of compute and the applications it can work on are almost orthogonal to traditional computing techniques. For example, while machine learning can be applied to neuromorphic computing in the form of Spiking Neural Networks (SNNs), traditional PyTorch and TensorFlow libraries aren't built to enable SNNs.
Today, as part of the announcements, Intel is launching a new underlying software framework for the neuromorphic community called Lava. This is an open-source framework, not under Intel's control but driven by the community. Intel has pushed a number of its early tools into the framework, and the idea is that over time a full software stack can be developed for everyone involved in neuromorphic computing to use, regardless of the hardware (CPU, GPU, neuromorphic chip). Lava is designed to be modular, composable, extensible, hierarchical, and open source. This includes a low-level interface for mapping neural networks onto neuromorphic hardware and channel-based asynchronous message passing, with all libraries and features exposed through Python. The software will be available for free use under BSD-3 and LGPL-2.1 on GitHub.

Initial Systems

The first version of Loihi 2 to be deployed in Intel's cloud services is Oheo Gulch, which looks like a PCIe add-in card using an FPGA to manage a lot of the IO, along with a backplane connector if needed. The 31 mm2 chip is BGA, and here we're seeing one of Intel's internal connectors for holding BGA chips onto a development board.

At a future date, Intel will produce a 4-inch by 4-inch version called Kapoho Point, with eight chips on board, designed to be stacked and integrated into a larger machine. With such a small chip, I wonder whether it would be worthwhile building it with a USB controller on the silicon, or adding a USB-to-Ethernet interface, and offering the hardware on USB sticks, akin to how Intel's Movidius hardware used to be distributed. We asked Intel about expanding the use of Loihi 2 out to a wider non-research/non-commercial audience to tinker and homebrew; however, as this is still an Intel Labs project right now, one of the key elements for the team is the dedicated collaborations they have with partners to push the segment forward. So we're going to have to wait at least another generation or more to see if any future Loihi systems end up being offered on Amazon.

Loihi 2 should be available for research partners to use from today as part of Intel's DevCloud. On-premises research/collaboration deployments are expected over the next 12-24 months.
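For a sense of what the Lava programming model described above looks like in practice, here is a tentative sketch based on the public Lava tutorials (not taken from the article): two groups of LIF neurons connected by a Dense synapse process and run in CPU simulation. The module paths and parameters may differ in current releases:

    # Tentative Lava sketch; module paths follow the public tutorials and may change.
    import numpy as np
    from lava.proc.lif.process import LIF
    from lava.proc.dense.process import Dense
    from lava.magma.core.run_configs import Loihi1SimCfg
    from lava.magma.core.run_conditions import RunSteps

    lif_in = LIF(shape=(8,), bias_mant=4, vth=10)   # input neuron group with a bias drive
    dense = Dense(weights=np.eye(8) * 5)            # one-to-one synaptic connections
    lif_out = LIF(shape=(8,), vth=10)               # output neuron group

    lif_in.s_out.connect(dense.s_in)                # spikes flow out of lif_in...
    dense.a_out.connect(lif_out.a_in)               # ...and arrive as currents at lif_out

    lif_out.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
    lif_out.stop()

Each process runs asynchronously and communicates only over its ports, which is what allows the same program to target CPU simulation or Loihi hardware by swapping the run configuration.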
On Sun, Oct 3, 2021, 1:33 PM coderman <coderman@protonmail.com> wrote:
hello Karl, a quick reply sans signature (embedded HTML) for convenience
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Friday, October 1, 2021 9:38 AM, Karl <gmkarl@gmail.com> wrote:
I think what PR means is, where the heck do hackers get neuromorphic processors to use this code??? Or is there some other use it has?
Intel is releasing the new hardware, but it's not clear to what groups, or at what cost.
Thanks
participants (3)
- coderman
- Karl
- professor rat