[ot] painful research joke: mind control of rat over wifi

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Sat Feb 25 22:29:01 PST 2023


The painful joke is because members of this list have likely
experienced a more conventional kind of mind control (that involving
abuse and messaging and possibly drugs and other things including
sometimes implants) for “ratting” of powerful criminals. The messaging
can at times include stories from the abuser that an implant has been
placed in their brain, and/or that the abuser controls them with their
mind alone and the victim is helpless, and/or that the victim is a
“worthless rat”. Computer viruses and spy devices are called implants
now, and I understand some people have found biological implants as
well.

In this normal recent research paper, wireless devices are attached to
the heads of a human and a biological rat such that the rat is forced
to do what the human directs.

I imagine you could also wire it the other way around, so that a human
has to do what a rat wants. Studies may be in progress.

This paper came up in my machine learning feed. The twitter account
usually tweets popular mainstream data science papers. Not sure why
they tweeted this one.

I expect that others would agree that, given experience of being mind
controlled, experiments like this are horrifying and highly unethical,
even though the subject is a tiny rat. Having your brain forced to
ignore everything that it is there to do for you, in order to attend
to moment-to-moment commands coming from invisible stimulation is
world-ending hellish torture that makes painful and confused lifelong
activism, uncommunicable complex trauma, and tragic loss of beautiful
independent spirits and life potential.

Victims need studies like these to somehow represent respect and
inclusion of the control subject.

Still, I strongly celebrate that research like this is producing
mainstream discourse and knowledge, and find it quite necessary. It is
one of the paths along which therapies for detection and recovery can
develop.

I apologize that this is not actual mkultra recovery research, which
likely also exists. This paper, the sharing of which unfortunately
supports harmful messaging, is still much easier for me to think of
and look at than more relevant ones. The paste below is poor quality
partly because I am not sure of the actual utility of the paper.

https://twitter.com/hardmaru/status/1628963052560482305

https://www.nature.com/articles/s41598-018-36885-0

Human Mind Control of Rat Cyborg’s Continuous Locomotion with
Wireless Brain-to-Brain Interface

Shaomin Zhang, Sheng Yuan, Lipeng Huang, Xiaoxiang Zheng, Zhaohui Wu,
Kedi Xu & Gang Pan
Brain-machine interfaces (BMIs) provide a promising information
channel between the biological brain and external devices and are
applied in building brain-to-device control. Prior studies have
explored the feasibility of establishing a brain-brain interface (BBI)
across various brains via the combination of BMIs. However, using BBI
to realize the efficient multidegree control of a living creature,
such as a rat, to complete a navigation task in a complex environment
has yet to be shown. In this study, we developed a BBI from the human
brain to a rat implanted with microelectrodes (i.e., rat cyborg),
which integrated electroencephalogram-based motor imagery and brain
stimulation to realize human mind control
of the rat’s continuous locomotion. Control instructions were
transferred from continuous motor imagery decoding results with the
proposed control models and were wirelessly sent to the rat cyborg
through brain micro-electrical stimulation. The results showed that
rat cyborgs could be smoothly and successfully navigated by the human
mind to complete a navigation task in a complex maze.
Our experiments indicated that the cooperation through transmitting
multidimensional information between two brains by computer-assisted
BBI is promising.

Direct communication between brains has long been a dream for people,
especially for those with difficulty in verbal or physical language.
Brain-machine interfaces (BMIs) provide a promising information
channel between the brain and external devices. As a potential human
mind reading technology, many previous BMI studies have successfully
decoded brain activity to control either virtual objects1–3 or real
devices4,5. On the other hand, BMIs can also be established in an
inverse direction of information flow, where computer-generated
information can be used to modulate the function of a specific brain
region6–8 or import tactile information back to the brain9–11. The
combination of different types of BMI systems can thus help to realize
direct information exchange between two brains to form a new
brain-brain interface (BBI). However, very few previous studies have
explored BBIs across different brains12. Miguel Pais-Vieira et al.
established a BBI to realize the real-time transfer of behaviorally
meaningful sensorimotor information between the brains of two rats13.
While an encoder rat performed a sensorimotor task, samples of its
cortical activity were transmitted to matching cortical areas of a
“decoder” rat using intracortical micro-electrical stimulation (ICMS)
on its somatosensory cortex. Guided solely by the information provided
by the encoder rat’s brain, the decoder rat learned to make similar
behavioral selections. BBIs between humans have also been preliminarily
explored. One example of a BBI between humans detected motor intention
with EEG signals recorded from one volunteer and transmitted this
information over the internet to the motor cortex region of another
volunteer by transcranial magnetic stimulation, which resulted in the
direct information transmission from one human brain to another using
noninvasive means14. In addition to information transfer between two
brains of the same type of organism, the BBI enables information to be
transferred from a human brain to another organism’s brain. Yoo et al.
used a steady-state visual evoked potential (SSVEP)-based BMI to
extract human intention and sent it to an anesthetized rat using
transcranial focused ultrasound stimulation on its brain, thereby
controlling the tail movement of the anesthetized rat by the human
brain15. In a very recent work, a BBI was developed to implement
motion control of a cyborg cockroach by combining a human’s SSVEP BMI
and electrical nerve stimulation on the cockroach’s antennae16. The
cyborg cockroach could then be navigated by the human brain to
complete walking along an S-shaped track.

Figure 1. Experiment setup. (a) Overview of the BBI system. In the
brain control sessions, the EEG signal was acquired and sent to the
host computer, where the motor intent was decoded. The decoding
results were then transferred into control instructions and sent to
the stimulator on the back of the rat cyborg with preset parameters.
The rat cyborg would then respond to the instructions and finish the
task. For the eight-arm maze, the width of each arm was 12 cm and the
height of the edge was 5 cm. The rat cyborg was located at the end of
either arm at the beginning of each run, and the preset turning
directions were informed vocally by another participant when a new
trial started. (b) Flowchart of the proposed brain-to-brain interface.
Although the feasibility of BBIs has been preliminarily proven, it is
still a big challenge to build an efficient BBI for the multidegree
control of the continuous locomotion of a mammal in a complex
environment. In the current study, we present a wireless
brain-to-brain interface, through which a human can mind control a
live rat’s continuous locomotion. Unlike the control of lifeless
devices, the control of a living creature in real time demands high
instantaneity because of the animal’s agility and self-consciousness.
For this purpose, the BBI system
requires timely reactions and a high level of accuracy in terms of
information decoding and importing, as well as real-time visual
feedback of the rat’s movement. The SSVEP-based BMI, as used for brain
intention decoding in previous BBI works that have depended on
visual stimulation, may distract the manipulator from reacting
promptly to real-time visual feedback. As an alternative solution,
motor imagery-based BMI has the advantages of rapid response and a low
level of distraction from the visual feedback. Therefore, the BBI
system established in the current study integrates control
instructions decoded by noninvasive motor imagery with neural
feedback, and the instructions are sent back to the rat’s brain by
ICMS in real time. We also proposed and compared two different control
models for our BBI system, the thresholding model (TREM) and the
gradient model (GRAM), to provide a more natural and easier process
for the manipulator during steering control. With this interface, our
manipulators were able to mind control a rat cyborg to smoothly
complete maze navigation tasks.
Results
Set up of BBI system and task design. The BBI system in the current
study consisted of two parts: a noninvasive EEG-based BMI and a rat
cyborg system17 (Fig. 1(a)). The EEG-based BMI decoded the motor
intent of left and right arm movement, which corresponded to the
generation of the Left and Right turning instructions, respectively. In the
current study, the average EEG signal control accuracy of all 6
manipulators was 77.86 ± 12.4% over all the experiments conducted. The
eye blink signals in the EEG were used to elicit the instruction
Forward/Reward, which was detected from the amplitude of the EEG signal in
the frontopolar channel. The rat cyborgs were prepared based on
previous works17–20 and were well-trained before experiments were
conducted in this study (see Methods for more details). Two parts of
the system were connected through an integration platform, sending
decoded instructions from motor intent to the rat cyborgs, and
providing visual information feedback in real time. An overview of the
BBI system is presented in Fig. 1.
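For concreteness, the blink-driven Forward/Reward channel could be sketched roughly as below. This is a hypothetical illustration, not the authors' code: the paper only says blinks were detected from the EEG amplitude in the frontopolar channel, so the function name and threshold value here are assumptions.

```python
import numpy as np

def blink_detected(frontopolar_uv, threshold_uv=80.0):
    """Hypothetical blink detector for the Forward/Reward instruction.

    frontopolar_uv: 1-D array of recent frontopolar-channel EEG samples
    in microvolts. Eye blinks produce large-amplitude deflections in
    frontopolar channels, so a simple amplitude threshold suffices for
    a sketch; the paper does not state its actual threshold.
    """
    return bool(np.max(np.abs(frontopolar_uv)) > threshold_uv)
```

A real detector would also debounce consecutive detections, so that one blink does not fire a burst of Forward instructions.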
The control effect of the rat cyborgs was evaluated by a turning task
on an eight-arm maze. A complete run of the turning task contained a
total of 16 turning trials, with eight left turnings and eight right
turnings. To avoid the influence of the memory and training experience
of the rats, the turning direction sequence was randomly generated by
computer before each task run.

Figure 2. (a) Performance of the manual control stage. The mean CPT
of each rat cyborg for manual control across all sessions. (Note: for
display, only positive standard deviations are presented as error
bars.) (b) Different areas assigned in the investigation of the
optimal area. The simplified plus-maze was modified from the original
eight-arm maze by blocking four crossing arms. (c) The averaged
success rate (mean ± SD) of each area for the rat cyborgs to receive
instructions with manual control.

The targeted turning
direction of each trial was informed vocally by other experimenters at
the start of each trial during the turning control experiments. For
each run, the rats were placed at the end of one of the eight arms as
a starting point. The rat was then driven towards the center of the
maze and guided to turn into one of the adjacent arms. A trial was
regarded as successful when the rat performed a correct turning and
reached the end of the target arm. A new trial would then start when
the rat reached the end of one arm and turned its head back towards
the center of the maze. If the rat failed to complete one turning
trial, the same turning direction trial was repeated until the rat
succeeded. The total time from the start to the end of completing 16
correct trials was recorded as the completion time (CPT) of each run.
The turning accuracy (TA) was then calculated as the ratio of the
number of correct turns to the total number of turns performed.
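As a quick illustration of the two metrics just defined, a run could be scored as follows. The trial log is made up for the example, not data from the paper:

```python
# Hypothetical trial log for one run: (correct_turn, duration_seconds).
# Failed trials are repeated, so their time still counts toward the
# total; the run ends once 16 correct trials have been completed.
trials = [(True, 10.0)] * 14 + [(False, 8.0), (True, 9.0), (True, 11.0)]

cpt = sum(duration for _, duration in trials)   # completion time (CPT)
correct = sum(1 for ok, _ in trials if ok)
ta = correct / len(trials)                      # turning accuracy (TA)
```

Here a single failed turn inflates both the CPT (168 s instead of 160 s) and the turn count, dropping TA to 16/17.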
The entire experiment contained three stages, one manual control stage
and two brain control stages, with each stage containing 5 sessions
and being performed on five consecutive days. Each session consisted
of 3 independent runs, with an interval break time between each run
of at least eight minutes. The entire procedure was video recorded,
and the mouse clicking sequences during manual control stage were
recorded for further analysis. In the second and third stages, two
different control models (GRAM and TREM, see details in the Methods)
were applied. To further test the applicability of brain control, the
rat cyborgs were controlled to complete a navigation task in a more
complicated maze.
Manual control of rat cyborg. During the manual control stage, the rat
cyborgs were controlled by experienced operators. We found that the
turning accuracy of a well-trained rat cyborg could achieve an
exceptionally high rate of nearly 100%. As displayed in Fig. 2(a), the
average CPT of all rat cyborgs at the first session of manual control
was 190.03 ± 75.41 s and decreased to 132.56 ± 12.39 s at the fifth
session. Most of the rats showed an obvious learning curve through
the manual control stage. The CPT of each rat cyborg became very close
at the end of the manual control stage, indicating that they were
becoming familiar with the task environment and the control
instructions delivered into their brains. There was no significant
difference (paired T-test, p > 0.05) between the average CPT of the
last two sessions of the manual control stage for each rat cyborg,
which indicated that the rat cyborgs were in a steady state.

Figure 3. (a) Average CPT across all rat cyborgs for the three
consecutive stages. (b) Average turning accuracy across all rat
cyborgs for the three consecutive stages. Error bars indicate the
standard deviation. *indicates p < 0.05.
During the manual control sessions, we noticed that the successful
turning behavior of a rat cyborg was highly dependent on the timing of
the turning instructions (Fig. 2(b)). To optimize the instruction
timing, an additional experiment was conducted. In this experiment,
the rats were placed at the end of the plus-maze, which was modified
from the original eight-arm maze, to wait for instructions to turn
left or right. By delivering turning instructions while the rats’
bodies were located in different sections along the straight arm, the
instruction timing could be evaluated by the turning success of the
rats. Figure 2(c) shows the overall performance of the turning success
rate at five equally divided sections of the maze. According to the
success rate of this plus-maze test, the best location for the rat
cyborg to receive turning instructions was the area near the
intersection (areas C and D in Fig. 2(b)). When considering brain
control conditions, motor imagery should be initiated slightly before
the optimal point for manual control because the decoding process and
instruction generation take a short period of time. Thus, in our
study, the manipulators were asked to start motor imagery when the
rats arrived at areas D and E.
BBI evaluation. After stage 1 of manual control, two further brain
control stages were performed by several brain control manipulators.
In the two brain control stages, the manipulators controlled the rat
cyborgs with a BBI (Fig. 1(a)) based on one of the two proposed
control models. During the first brain control stage (stage 2), the
gradient model (GRAM) was applied, and in the second brain control
stage, the thresholding model (TREM) was applied. The two control
models were based on different threshold calculating strategies. The
thresholds were used to differentiate the decoding results attributed
to real intention or noise (see Methods for a detailed explanation
of thresholds). The results of the two control models are shown in
Fig. 3. The overall CPT value remained stable in both brain control
stages, with no significant difference between the two sessions inside
each stage (Fig. 3(a), paired T-test for the average CPT, p > 0.05).
However, a comparison between the two brain control stages showed that
a longer time was taken to complete the same maze tasks with the
TREM-based BBI system. The average CPT of all rat cyborgs across the
GRAM-based stage 2 was shorter than that of the TREM-based stage 3
(243.41 ± 12.73 s vs. 275.05 ± 14.47 s, paired T-test, p < 0.05),
demonstrating that the GRAM model was better than the TREM model for
the proposed BBI system.

Figure 4. (a) Average number of turning instructions for all the rat
cyborgs across all the sessions and a comparison of the group-level
number of turning instructions between different stages. (b) Average
number of Forward instructions for all the rat cyborgs across all
sessions and a comparison of the group-level number of Forward
instructions between different stages. ***indicates p < 0.01,
*indicates p < 0.05, paired T-test.
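As a rough illustration of what such a control model does (the exact TREM and GRAM threshold calculations are given in the paper's Methods, which this paste only partially includes), a minimal thresholding rule might look like this; the function name and threshold value are assumptions:

```python
def to_instruction(b_left, b_right, threshold=0.6):
    """Map normalized motor-imagery intensities (0..1) to an instruction.

    Deliberately simplified sketch: decoding outputs below the
    threshold are treated as noise and produce no instruction, which
    is the role the paper assigns to the thresholds in both models.
    """
    if b_left >= threshold and b_left > b_right:
        return "Left"
    if b_right >= threshold and b_right > b_left:
        return "Right"
    return None  # treated as noise; no instruction sent to the rat
```

The GRAM/TREM difference described in the paper lies in how such a threshold is calculated and how quickly an instruction is released once the decoded intensity crosses it.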
As shown in Fig. 3(b), the average turning accuracy of all rat cyborgs
dropped approximately 15% at the first session of brain control stage
2 compared to that in the manual control stage. The turning accuracy
then gradually increased back to 98.08 ± 2.31% at the last session
in stage 2, indicating that the rat cyborgs could quickly be
accustomed to the transition of different control styles. The drop of
the fourth session was most likely due to the poor performance (81.67
± 5.44%) of one rat cyborg. When the brain control model changed from
GRAM at stage 2 to TREM at stage 3, the turning accuracy slightly
dropped to 90.35 ± 5.03% in the first session of stage 3 and then
generally increased across the remainder of the last stage. The group
level of turning accuracy on average for stage 2 and 3 was 91.75 ±
3.85% and 93.32 ± 1.73%, respectively (stage 2 vs. stage 3, paired
T-test, p > 0.05). Overall, the turning accuracy of stage 2 and stage
3 demonstrated stable behavior results of brain control on rat cyborgs
at the group level.
We further analyzed the number of each type of instruction sent across
the three stages. Figure 4(a) shows the average number of Left and
Right turning instructions to complete an experimental run across
sessions of all the rat cyborgs tested. Theoretically, the minimum
number of turning instructions given in a 100% accuracy run is 16,
which can hardly be reached even by experienced manual control.
For the GRAM-based and the TREM-based brain control stages,
the group-level numbers of turning instructions were 60.15 ± 7.33 and
87.98 ± 56.30 (stage 2 vs. stage 3, paired T-test, p < 0.01),
respectively. Thus, more turning instructions were needed to steer the
rat cyborg with TREM-based brain control. Since the number of turning
instructions was largely affected by the accuracy of the instructions,
the extra instructions in TREM were most likely used to compensate for the
effect of wrong turning behavior. As we mentioned above, instructions
given with a proper timing contributed to fewer mistakes; therefore,
the lower number of turning instructions in the GRAM-based brain
control stage demonstrated that there was less error turning
correction in GRAM-based stage 2 than in TREM-based stage 3.

Figure 5. The delay between the start of the decoding result output and
the instruction generation, with respect to the thresholds for GRAM and
TREM. ***indicates p < 0.01, T-test.
As shown in Fig. 4(b), the group level average of Forward instructions
across the sessions of GRAM-based and TREM-based brain control was
228.14 ± 44.44 and 286.70 ± 13.57, respectively. The statistical
analysis indicated that the number of Forward instructions sent
showed no significant difference (stage 2 vs. stage 3, paired T-test, p =
0.09) between the two brain control stages. This may be due to the
large fluctuation in the first two sessions of stage 2, which might
have been caused by the transition from manual control to brain
control. On one hand, the brain-control manipulators needed to gain
experience in controlling rats. On the other hand, the rat cyborgs
also needed time to adapt to the new control strategy,
especially the different stimulation timing and frequency from manual
control. When only the later three sessions of stage 2 and stage 3
were compared, the number of Forward instructions sent did show a significant
difference (later three sessions, stage 2 vs. stage 3, paired T-test,
p = 0.03). This result demonstrated that the TREM-based brain control
model requires more Forward instructions for the rat cyborgs to
complete the same turning tasks. The reason for more Forward
instructions with the TREM-based brain control model was that the rat
cyborgs performed worse with TREM and required more turning and
Forward instructions to correct the wrong behavior.
To explain the different performances of the GRAM- and TREM-based brain
control strategies, we also calculated the short delays that occurred
between the decoding result output from the EEG device and the instructions
generated by two different control models. Our results showed a nearly
70% reduction of instruction generation delay with GRAM (155.01 ± 3.10
ms) compared to TREM (494.70 ± 47.22 ms) (shown in Fig. 5). Turning
instructions were thus generated and sent much more quickly after the
motor imagery with the GRAM model, which resulted in less wrong turning
behavior by the rat cyborgs and better turning performance.
The BBI system was further tested in a maze of higher complexity to
test its applicability and stability. The rats were asked to complete
a series of preset navigation tasks such as climbing and descending
steps, turning left or right, and going through a tunnel in a
three-dimensional maze under control of the BBI system. When the rat
went into a wrong direction or turned into an unexpected route, the
manipulator needed to guide the rat back to the correct route (Fig. 6,
see more details in Supplementary Video 1). A completion time limit of
5 minutes per run was set as the criterion for evaluating the success
rate. A successful run was defined as the rat cyborg finishing all of
the preset navigation tasks along the route within the time limit. All
rats that had participated in the turning tasks were tested with the
optimized GRAM-based brain control model in the maze task. The rats all
performed well, with high success rates in 10 consecutive tests (Table
1).
Discussion
Our study demonstrated the feasibility of cultivating an information
pathway between a human brain and a rat brain. With our BBI system, a
rat cyborg could accurately complete turning and forward behavior
under the control of a human mind, and could perform navigation tasks
in a complicated maze. Our work extended and explored the further
possibility of functional information transmission from brain to
brain. Unlike mechanical robots, the rat cyborgs have
self-consciousness and flexible motor ability, which means the rat
cyborgs will have unexpected movements depending on their own will
during the control period. The BBI system should thus be designed with
high instantaneity and real-time feedback for a better control effect.
Previous brain-to-brain systems have mainly been based on the SSVEP
paradigm15,16. In the SSVEP paradigm, the manipulators must switch
their attention between the feedback screen and the flickers. However,
rat cyborgs move quickly and require a minimum frequency of Forward
instructions above 3 Hz. It is thus difficult for the human
manipulator to send a high frequency of Forward instructions and
simultaneously watch the locomotion of the rat cyborgs on the feedback
screen. Compared with previous works15,16, we used motor imagery
and eye blink as manipulative protocols and provided real-time visual
feedback of the rat cyborg, which is comparably more viable and avoids
the visual fatigue of the manipulators. In addition, during the rat
cyborgs brain control experiments, the overall perfor- mance was
influenced by several major factors:

Figure 6. The rat cyborg was navigated by human brain control in a
more complex maze (see more details in Supplementary Video 1). The
three-dimensional maze was more complicated, consisting of a start
point and an end point, slopes and stairs for climbing and descending,
a raised platform with a height of half a meter, pillars to be avoided
and a tunnel to be passed through. The rat cyborgs were asked to
complete the navigation task along the preset route (red arrows)
within 5 minutes.
Table 1. Success rate of brain control in the complex maze.

Rat cyborg   Success   Total   Success rate
A01          8         10      80%
A02          9         10      90%
A03          8         10      80%
A04          9         10      90%
A05          10        10      100%
A06          10        10      100%
Average      9         10      90%
   (1) The accuracy of instructions. The decoding correctness of motor
imagery and the appropriate timing of control instructions influence
the control performance the most. Furthermore, the instruction should
be sent with high instantaneity, especially when an unexpected mistake
occurs. In our brain control sessions, the correctness mainly depended
on the threshold value and the timeliness of triggering instruction
determined by the control models. The better performance (less CPT and
number of turning and forward instructions) for GRAM-based BBI is most
likely due to less delay between the start of the decoding results and
the release of instructions. Comparatively, the longer delay occurred
in TREM may probably contrib- ute to a longer CPT, which in turn
resulted in greater amount of instructions needed to complete the
task. Besides, the longer delay in the TREM model also leads to
obstruction of motor imagery. The manipulators reported that the delay
of instruction release during TREM brain control could not readily be
perceived. Although the manipulators tried to begin imagery in
advance, it was difficult to decide the concrete timing and difficult
to operate when instructions were needed to be released over a short
period. In contrast, with the short response duration in GRAM, the
manipulators were able to start motor imagery at the optimal
instruction-receiving time, and switching between Left and Right
instructions was much easier.
(2) Adaption of the manipulators to brain control task. The mental
status of a manipulator can be influenced by disturbance, such as
environmental noise, and fatigue caused by long-duration imagery. The
ability to overcome these could be improved after several practice
sessions. The noninvasive EEG-based BMI used in this study translates
the sensorimotor rhythms detected in the bilateral motor areas to the
control signal for the rat cyborg. This is not intuitive to the
manipulators at the beginning of the experiment, but becomes more
intuitive as the experiment goes on. The manipulators gradually learn
what instruction should be sent and when their imagery should begin
according to the movements and locations of the rat cyborg, thereby
cultivating a tacit understanding between the human and the rat
cyborg. The stable level of performance seen in the later sessions of
stage 2 and stage 3 indicates this mutual adaption.
(3) The inherent adaptive ability of rat cyborgs. Rat cyborgs possess
an inherent adaptive ability to their environment and the control
method. The overall decrease of average CPT in the manual control
stage indicates the adaption of rat cyborgs to the control instructions. The
variation trend of each line indicates the various adaption abilities
among rat cyborgs. Intriguingly, the final CPT of each rat cyborg
reached a similar level. It is likely that all of the rat cyborgs
adapted to the same control pattern of the operator. In addition, the
rat cyborgs can also adapt to the changes of instruction release due
to their excellent learning ability. The results showed that the
performance was adversely affected by changes in the control mode
(stage 1, session 5 vs. stage 2, session 1 and stage 2, session 5 vs.
stage 3, session 1 in Fig. 3(a,b)) but subsequently stabilized. The
decrease in the turning accuracy from stage 1 to stage 2 was much
sharper than the change from stage 2 to stage 3. This may be because
the control pattern is more distinct between manual control and brain
control. Between the different brain control stages, by contrast, the
manipulator's control pattern was unlikely to alter dramatically.
In conclusion, our findings suggest that a computer-assisted BBI
that transmits information between two entities is intriguingly
possible. The control model proposed here could transfer the decoding
results of a motor imagery-based EEG-BMI to other external devices with
remarkable instantaneity. In the future, error-related potentials
(ErrPs)21 could be used to detect falsely generated instructions,
thereby eliminating the wrong instructions before sending them to the
rat cyborgs. Furthermore, information flow could be made bidirectional
and communicative between two human individuals.
Methods
Participants and ethics statement. Six rats were engaged in this
study. All methods were carried out in accordance with the National
Research Council’s Guide for the Care and Use of Laboratory Animals.
All experimental protocols were approved by the Ethics Committee of
Zhejiang University, China. Informed consent was obtained from all
manipulators.
Rat cyborg preparation. The rat cyborg system had long been developed
in our previous research work. Briefly, bipolar stimulating electrodes
were made from pairs of insulated nichrome wires (65 μm in diameter),
with a 0.5 mm vertical tip separation. Microelectrodes were implanted
into the rat’s brain for the control of their locomotion. Two pairs of
electrodes were implanted in the bilateral medial forebrain bundle
(MFB)22 for virtual reward stimulation and instruction of forward
moving. The other two pairs of electrodes were implanted symmetrically
in both sides of the whisker barrel fields of the somatosensory
cortices (SIBF)23 for turning cue stimulation. The rats were allowed
to recover from the surgery for one week before the experiments. Once
recovered, the rat cyborgs were first trained to correlate the
stimulations with the corresponding locomotion behaviors17. The
parameters of the electrical stimulation that were sent into the rat’s
brain were based on our previous works24, which can activate
appropriate behavior but avoid seizures even after a long duration of
stimulation. During the training and control sessions, electrical
stimulations were delivered through a wireless microstimulator mounted
on the rat’s back. Control instructions were given by operators with a
computer program wirelessly connected to the microstimulator through
Bluetooth.
Decoding in the BBI. A commercial EEG device, Emotiv EPOC (Emotiv
Inc., USA)25 was used in this study for EEG data recording. EEG data
were acquired with a 14-channel neuroheadset, with all electrode
impedances kept below 10 kΩ. During the brain control experiments, the
EEG signals were sampled at the rate of 256 Hz. The recorded data were
then wirelessly transmitted to a host computer through Bluetooth and
further processed with the help of Emotiv SDK. Through trained
imagination, the manipulators learned to modulate their sensorimotor
rhythm amplitude in the upper mu (10–14 Hz) frequency band26,27. The
power spectrum of left and right composition was then obtained as
the intensity of motor imagery by common spatial pattern (CSP)28,
i.e., xL(t) and xR(t), respectively. Details of the common spatial
pattern filter are described as follows:
Let XR and XL denote the preprocessed EEG during right- or left-hand
movements with dimensions N × T, where N is the number of channels and
T is the number of samples per channel. The common spatial pattern
filter is acquired as follows:
(1) Calculate the normalized channel covariances of XL and XR as:

    C_L = (X_L X_L^T) / trace(X_L X_L^T)    (1)

    C_R = (X_R X_R^T) / trace(X_R X_R^T)    (2)

(2) Average C_L and C_R over all of the left- and right-hand movement
EEG trials; the composite spatial covariance is:

    C_C = C_L + C_R    (3)

(3) Perform eigenvalue decomposition on the composite spatial
covariance:

    C_C = U_C Σ_C U_C^T    (4)

(4) Perform the whitening transform on C_L and C_R; the transformed
spatial covariance matrices are:

    S_L = P C_L P^T    (5)

    S_R = P C_R P^T    (6)

where the whitening matrix is:

    P = Σ_C^(-1/2) U_C^T    (7)

(5) Perform eigenvalue decomposition on the transformed spatial
covariance matrices:

    S_L = U_L Σ_L U_L^T    (8)

    S_R = U_R Σ_R U_R^T    (9)

(Note that Σ_L + Σ_R must be an identity matrix.)

(6) The eigenvectors corresponding to the largest eigenvalues in Σ_L
and Σ_R are chosen to calculate the common spatial pattern filters for
right- and left-hand movements, which can be written as:

    SF_L = U_L(i | argmax_i Σ_L(i))^T P    (10)

    SF_R = U_R(j | argmax_j Σ_R(j))^T P    (11)

(7) Let x(t) be the preprocessed EEG signal recorded in the movement
imagery application; the intensities of left- and right-hand movement
imagery are given by:

    x_L(t) = SF_L x(t)    (12)

    x_R(t) = SF_R x(t)    (13)

(8) Finally, calculate the power spectral densities of x_L(t) and
x_R(t), and aggregate the band power within an overlapping window of
length k:

    B_L(t) = Σ_{τ = t−k+1}^{t} P(x_L(τ))    (14)

    B_R(t) = Σ_{τ = t−k+1}^{t} P(x_R(τ))    (15)
where P(x(t)) indicates the power spectral density of x(t). The
intensity of motor intent was then mapped to a value ranged from 0 to
1, and the normalized B(t) was used as the input of the control model.
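Steps (1)–(8) can be condensed into a short numerical sketch. This is
an illustration, not the authors' code: numpy stands in for the
original pipeline, two single trials stand in for the trial-averaged
covariances, and all function names here are invented.

```python
import numpy as np

FS = 256  # EEG sampling rate reported in the study (Hz)

def csp_filters(XL, XR):
    """Steps (1)-(6): one CSP spatial filter per class.
    XL, XR: (N channels, T samples) EEG for left/right imagery."""
    def norm_cov(X):
        C = X @ X.T
        return C / np.trace(C)          # Eqs. (1)-(2)

    CL, CR = norm_cov(XL), norm_cov(XR)
    evals, U = np.linalg.eigh(CL + CR)  # composite covariance, Eqs. (3)-(4)
    P = np.diag(evals ** -0.5) @ U.T    # whitening transform, Eq. (7)
    SL = P @ CL @ P.T                   # Eq. (5); note S_L + S_R = I
    _, UL = np.linalg.eigh(SL)          # Eq. (8); eigenvalues ascend
    sf_left = UL[:, -1] @ P             # Eq. (10): largest eigenvalue of S_L
    sf_right = UL[:, 0] @ P             # Eq. (11): largest eigenvalue of S_R
    return sf_left, sf_right

def band_power(x, band=(10.0, 14.0), fs=FS):
    """Step (8): periodogram power of a filtered signal in the upper mu band."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return psd[sel].sum()

# synthetic check: channel 0 dominates "left" trials, channel 1 "right" trials
rng = np.random.default_rng(0)
XL = rng.standard_normal((4, 512)); XL[0] *= 5.0
XR = rng.standard_normal((4, 512)); XR[1] *= 5.0
fL, fR = csp_filters(XL, XR)
mu_sig = np.sin(2 * np.pi * 12 * np.arange(512) / FS)     # inside 10-14 Hz
gamma_sig = np.sin(2 * np.pi * 40 * np.arange(512) / FS)  # outside the band
```

Because the whitened class covariances sum to the identity, the filter
that maximizes one class's variance simultaneously minimizes the
other's, which is why one eigenvector per class suffices.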
Set up of the BBI system. As the BBI system consisted of a noninvasive
EEG-based BMI and a rat cyborg system, a controlling program written
in Visual C++ was used to acquire raw EEG data via the Emotiv SDK,
generate instructions with the control models, and trigger the release
of instructions to the rat cyborg. The locomotion and location of the
rat cyborg in the entire experimental scene were captured by a
top-view camera and fed back visually to the manipulators on an LCD
screen in real time. The decoding results of motor imagery were
relayed through a flashing instruction feedback panel integrated at
the bottom of the LCD by a self-written program based on OpenCV (Open
Source Computer Vision Library, http://opencv.org). The EEG decoding
results and motor control instructions were recorded with a J2EE (Java
2 Platform, Enterprise Edition)-based program for further analysis.
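The acquire–decode–dispatch loop described above can be sketched with
stub components standing in for the Emotiv SDK, the Visual C++
controller and the Bluetooth link; every name below is illustrative,
not taken from the original programs.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class BBILoop:
    """One pass of the loop: decode EEG, relay any instruction, log it."""
    decode: Callable[[], Optional[str]]  # EEG -> "Left" | "Right" | "Forward" | None
    send: Callable[[str], None]          # relay an instruction to the stimulator side
    log: List[str] = field(default_factory=list)  # kept for offline analysis

    def step(self) -> None:
        instr = self.decode()
        if instr is not None:
            self.send(instr)
            self.log.append(instr)

# wiring with stubs: a canned decode sequence and a list as the "radio link"
sent: List[str] = []
fake_decodes = iter(["Left", None, "Forward"])
loop = BBILoop(decode=lambda: next(fake_decodes), send=sent.append)
for _ in range(3):
    loop.step()
```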
Control models for the BBI. The inputs of the control model included
the decoding results of Left or Right motor imagery and eye blink
detection. The collected EEG signals were projected by a common
spatial pattern (CSP) spatial filter. Next, the power spectra of the
left and right components were obtained as the intensities of motor
imagery, i.e., xL(t) and xR(t), respectively. An eye blink, xF(t), was
detected when the amplitude of the EEG signal E(t) on the channels
near the eyes exceeded a threshold θ_EOG:

    x_F(t) = 1 if E(t) ≥ θ_EOG, 0 otherwise.    (16)
The output of the control model was a control signal for the
microelectrical stimulations. Y_L(t), Y_R(t) and Y_F(t) represent the
Left, Right and Forward instructions, respectively.
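The eye-blink input xF(t) is just an amplitude test on the channels
near the eyes; a minimal sketch, with an invented threshold value (the
study set θ_EOG per manipulator, and the real signal comes from the
headset's frontal channels):

```python
THETA_EOG = 80.0  # illustrative amplitude threshold; not a value from the paper

def blink_detected(frontal_samples, theta=THETA_EOG):
    """x_F(t): 1 when any frontal-channel sample reaches the threshold, else 0."""
    return 1 if max(abs(s) for s in frontal_samples) >= theta else 0
```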
For the safety of the rat cyborgs, instructions were sent under the
following rule: if two instructions were presented in succession, the
latter instruction would only be sent when the time interval between
them was larger than a predefined threshold ΔT. Adjacent instructions
were defined as tuples <C1, C2>, C1, C2 ∈ {Left, Right, Forward}. Five
Figure 7. Samples of decoding results and their corresponding
gradients of motor imagery in a preliminary experiment. The blue curve
is the result of right imagery (Right) and the orange curve is the
result of left imagery (Left). During the right turning period shown
in the figure, only right imagery occurred, while in the left turning
period, both left and right results appeared; the right decoding
results there were deemed to be caused by noise. Likewise, the left
decoding results appearing in the blank period (no imagination) are
regarded as noise. The gradL (yellow) and gradR (light blue) curves
represent the left and right gradients of the corresponding decoding
results, respectively. θ_L and θ_R are the optimal thresholds for left
and right motor imagery in TREM; for GRAM, the optimal thresholds are
θ′_L and θ′_R.
out of nine types of tuples were restricted, namely, ΔT<F,F>, ΔT<L,L>,
ΔT<R,R>, ΔT<L,F> and ΔT<R,F>. These five were determined by the number
distribution of the interval for each tuple based on the manual
control sequence record. To guarantee the proper reaction, the level
of excitement and the safety of the rat cyborgs, the intervals of
ΔT<F,F>, ΔT<L,L>, ΔT<R,R>, ΔT<L,F> and ΔT<R,F> for brain control were
set to be 200 ms, 500 ms, 500 ms, 350 ms and 350 ms, respectively,
according to our previous work17. The minimum time interval was not
restricted for F-L, F-R, R-L and L-R because the manipulator needed to
send the first turning command as quickly as possible.
We defined n = 0, 1, ... as the n-th generation of an instruction, and
t_L(n), t_R(n) and t_F(n) as the times at which the n-th Left, Right
and Forward instructions occurred. Initially, t_L(n), t_R(n) and
t_F(n) were equal to 0 (n = 0). The generation of Forward was the same
for the two models, as described below:
    Y_F(t) = 1 if t ∈ { t_F | t_F(n) − t_F(n−1) ≥ ΔT_<F,F>,
                        t_F(n) − t_L(n−1) ≥ ΔT_<L,F>,
                        t_F(n) − t_R(n−1) ≥ ΔT_<R,F>,
                        and x_F(t_F) = 1 },
    Y_F(t) = 0 otherwise.    (17)
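The safety rule and the Forward condition amount to a small gate over
instruction timestamps. Below is a sketch using the ΔT values reported
above; the class and method names are invented:

```python
# Minimum intervals (ms) for the five restricted tuples <previous, new>;
# unrestricted tuples (e.g. Forward -> Left) default to 0.
DELTA_T = {
    ("Forward", "Forward"): 200,
    ("Left", "Left"): 500,
    ("Right", "Right"): 500,
    ("Left", "Forward"): 350,
    ("Right", "Forward"): 350,
}

class InstructionGate:
    def __init__(self):
        self.last = {}  # instruction -> time (ms) at which it was last sent

    def try_send(self, instr, now_ms):
        """Release `instr` only if every restricted interval has elapsed."""
        for prev, t_prev in self.last.items():
            if now_ms - t_prev < DELTA_T.get((prev, instr), 0):
                return False
        self.last[instr] = now_ms
        return True

gate = InstructionGate()
accepted = [gate.try_send(i, t) for i, t in [
    ("Forward", 0),    # first instruction: nothing to wait for
    ("Forward", 100),  # blocked: <F,F> needs 200 ms
    ("Left", 100),     # F -> L is unrestricted
    ("Forward", 300),  # blocked: <L,F> needs 350 ms since the Left at 100
    ("Forward", 450),  # all intervals elapsed
]]
```

Combined with the blink test x_F, passing this gate reproduces the
Forward rule above.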
Two models (Fig. 1(b)) for generating the Left and Right instructions
were proposed. One was the thresholding model (TREM), in which
instructions were generated when the decoding results exceeded a
threshold (θ). The other was the gradient model (GRAM), in which
instructions were generated when the gradient between two decoding
results transcended a threshold (θ′). The thresholds were used to
differentiate decoding results attributable to real intention from
noise. Figure 7 shows typical decoding results of left and right
imagery and their corresponding gradients.
Thresholding Control Model. For TREM, controlling impulses were
generated when the intensity of left or right exceeded a threshold θ.
A turning instruction was generated if xL(t) > θL or xR(t) > θR.
Therefore, the function of TREM is described as follows:

    Y_L(t) = 1 if t ∈ { t_L | t_L(n) − t_L(n−1) ≥ ΔT_<L,L> and x_L(t_L) ≥ θ_L },
    Y_L(t) = 0 otherwise.    (18)

    Y_R(t) = 1 if t ∈ { t_R | t_R(n) − t_R(n−1) ≥ ΔT_<R,R> and x_R(t_R) ≥ θ_R },
    Y_R(t) = 0 otherwise.    (19)
Gradient Control Model. Although the threshold in TREM could
differentiate decoding results attributable to real intention from
floating background noise, the delay between the start of the decoding
results and the generation of an instruction was too long. We proposed
an improved model, GRAM, which performed better in both
differentiation and instantaneity. For GRAM, instructions were
generated when the gradient between two decoding windows transcended a
threshold θ′. The gradient was calculated as follows:

    Grad x(t) = x(t) − x(t − 1)    (20)

A turning instruction was generated if Grad x(t) > θ′. Accordingly,
the function of GRAM is described as follows:

    Y_L(t) = 1 if t ∈ { t_L | t_L(n) − t_L(n−1) ≥ ΔT_<L,L> and Grad x_L(t_L) ≥ θ′_L },
    Y_L(t) = 0 otherwise.    (21)

    Y_R(t) = 1 if t ∈ { t_R | t_R(n) − t_R(n−1) ≥ ΔT_<R,R> and Grad x_R(t_R) ≥ θ′_R },
    Y_R(t) = 0 otherwise.    (22)

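GRAM instead thresholds the step-to-step gradient x(t) − x(t − 1), so
it can fire on the rising edge of a decoding burst rather than waiting
for its peak. A sketch with an invented θ′ and the ΔT gating again
omitted:

```python
THETA_PRIME = 0.25  # illustrative gradient threshold, not a value from the paper

def gram(decoded, theta_prime=THETA_PRIME):
    """Gradient model: mark each step whose rise over the previous
    decoding result reaches theta' (the Eq. (20) difference)."""
    return [1 if cur - prev >= theta_prime else 0
            for prev, cur in zip(decoded, decoded[1:])]
```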
The thresholds θ and θ′ were decided prior to the implementation of
brain control. To ascertain the optimal threshold, a preliminary
experiment was conducted. The manipulators were asked to complete
three rounds of eight motor imagery tasks. Intents were decoded in
real time, and the decoding results were recorded. The best threshold
was determined with a receiver operating characteristic (ROC) curve.
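One way the recorded preliminary-experiment decodings could yield an
optimal threshold is an ROC sweep that maximizes Youden's J = TPR −
FPR; the paper does not state its exact ROC criterion, so that
objective (and the toy data below) are assumptions:

```python
import numpy as np

def best_threshold(scores, labels):
    """Sweep every observed score as a candidate threshold and keep the
    one maximizing Youden's J = TPR - FPR on the labeled recordings."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tpr = (pred & labels).sum() / max(labels.sum(), 1)
        fpr = (pred & ~labels).sum() / max((~labels).sum(), 1)
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

# imagery trials score high, noise trials low; the sweep finds the separator
t_opt = best_threshold([0.1, 0.2, 0.7, 0.9], [0, 0, 1, 1])
```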
Data Availability
The datasets generated during and/or analyzed during the current study
are available from the corresponding author on reasonable request.
References
1. Royer, A. S., Doud, A. J., Rose, M. L. & He, B. EEG control of a
virtual helicopter in 3-dimensional space using intelligent control
strategies. IEEE Trans Neural Syst Rehabil Eng 18, 581 (2010).
2. Xia, B. et al. A combination strategy based brain–computer
interface for two-dimensional movement control. J Neural Eng 12, 46021
(2015).
3. Wolpaw, J. R. & McFarland, D. J. Control of a two-dimensional
movement signal by a noninvasive brain-computer interface in humans.
Proc Natl Acad Sci USA 101, 17849 (2004).
4. Carlson, T. & Millan, J. D. R. Brain-controlled wheelchairs: a
robotic architecture. IEEE Robot Autom Mag 20, 65 (2013).
5. Meng, J. et al. Noninvasive electroencephalogram based control of a
robotic arm for reach and grasp tasks. Sci Rep 6, 38565 (2016).
6. Hoy, K. E. & Fitzgerald, P. B. Brain stimulation in psychiatry and
its effects on cognition. Nat Rev Neurol 6, 267 (2010).
7. Chen, R. et al. Depression of motor cortex excitability by
low-frequency transcranial magnetic stimulation. Neurology 48, 1398
(1997).
8. Nguyen, J. et al. Repetitive transcranial magnetic stimulation
combined with cognitive training for the treatment of Alzheimer's
disease. Neurophysiol Clin 47, 47 (2017).
9. O'Doherty, J. E. et al. Active tactile exploration using a
brain–machine–brain interface. Nature 479, 228 (2011).
10. Flesher, S. N. et al. Intracortical microstimulation of human
somatosensory cortex. Sci Transl Med 8, 141r (2016).
11. Romo, R., Hernández, A., Zainos, A. & Salinas, E. Somatosensory
discrimination based on cortical microstimulation. Nature 392, 387
(1998).
12. Min, B., Marzelli, M. J. & Yoo, S. Neuroimaging-based approaches
in the brain-computer interface. Trends Biotechnol 28, 552 (2010).
13. Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J. & Nicolelis,
M. A. A brain-to-brain interface for real-time sharing of sensorimotor
information. Sci Rep 3, 1319 (2013).
14. Rao, R. P. et al. A direct brain-to-brain interface in humans.
PLoS One 9, e111332 (2014).
15. Yoo, S., Kim, H., Filandrianos, E., Taghados, S. J. & Park, S.
Non-invasive brain-to-brain interface (BBI): establishing functional
links between two brains. PLoS One 8, e60410 (2013).
16. Li, G. & Zhang, D. Brain-computer interface controlled cyborg:
establishing a functional information transfer pathway from human
brain to cockroach brain. PLoS One 11, e150667 (2016).
17. Feng, Z. et al. A remote control training system for rat
navigation in complicated environment. J Zhejiang Univ Sci A 8, 323
(2007).
18. Talwar, S. K. et al. Behavioural neuroscience: Rat navigation
guided by remote control. Nature 417, 37 (2002).
19. Wang, Y. et al. Visual cue-guided rat cyborg for automatic
navigation [research frontier]. IEEE Comput Intell Mag 10, 42 (2015).
20. Yu, Y. et al. Intelligence-augmented rat cyborgs in maze solving.
PLoS One 11, e147754 (2016).
21. Chavarriaga, R., Sobolewski, A. & Millán, J. D. R. Errare
machinale est: the use of error-related potentials in brain-machine
interfaces. Front Neurosci 8, 208 (2014).
22. Hermer-Vazquez, L. et al. Rapid learning and flexible memory in
"habit" tasks in rats trained with brain stimulation reward. Physiol
Behav 84, 753 (2005).
23. Paxinos, G. & Watson, C. The rat brain in stereotaxic coordinates
(Elsevier Academic Press, Amsterdam, 2014).
24. Xu, K., Zhang, J., Zhou, H., Lee, J. C. T. & Zheng, X. A novel
turning behavior control method for rat-robot through the stimulation
of ventral posteromedial thalamic nucleus. Behav Brain Res 298, 150
(2016).
25. Martinez-Leon, J., Cano-Izquierdo, J. & Ibarrola, J. Are low cost
Brain Computer Interface headsets ready for motor imagery
applications? Expert Syst Appl 49, 136 (2016).
26. Pfurtscheller, G., Neuper, C., Flotzinger, D. & Pregenzer, M.
EEG-based discrimination between imagination of right and left hand
movement. Electroencephalogr Clin Neurophysiol 103, 642 (1997).
27. Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G.
& Vaughan, T. M. Brain–computer interfaces for communication and
control. Clin Neurophysiol 113, 767 (2002).
28. Kumar, S., Mamun, K. & Sharma, A. CSP-TSM: Optimizing the
performance of Riemannian tangent space mapping using common spatial
pattern for MI-BCI. Comput Biol Med 91, 231 (2017).

