[project][notes][crazy] Digital Sewing of an EEG Cap
I had downtime at a library and looked into this a little.
- Even if you don't have a CNC sewing machine, you can do the cutting with a die-cutter device like a Cricut and make the sewing very easy and clear.
- I websearched sewing a cap; the example I found used four panels sewn flat, with curved proportions such that they fit a head when expanded. The fabric cuts are offset from the theoretical edges to leave room for sewing. The example used whole ellipse-like segments bonded at the edges. This made a spheroid-like final result, which was inverted once to hide the sewing; the bottom was then inverted again into the top to make two layers.
- You could calculate the proportions of these panels for an individual head via measurement or medical imagery. I started by assuming a mathematical spheroid with two unique dimensions.
- I think the panels that cover a spheroid are elliptical shapes where the height equals the height of the spheroid (possible mistake, reconsider?), and the width of each is the circumference of the spheroid divided by the number of panels. It took me a while to realize this again.
- Scratch code at https://www.kaggle.com/code/superjunk/notebookfdb0c5e39b uses matplotlib to draw a parametrically designed sewing pattern for a spheroid. The larger curve is the offset to cut along; the curve inside it is where adjacent parts fold and are sewn. It looks as if you could actually do it with only one panel if desired; unsure.
- ellipse width = circumference / panel count
- ellipse height = height  # i think this is a mistake and instead should be the vertical arclength
- Ellipse-like arclengths are calculated using a function called the "incomplete elliptic integral of the second kind", or E: if a and b are the ellipse radii (in either order), then between angles t1 and t2 the arclength is something like arclength = b * (E(t2 | 1 - (a/b)^2) - E(t1 | 1 - (a/b)^2)).
- I didn't readily find an inverse for E, so finding angle ranges that tessellate to a specific resolution precisely would mean new numeric solutions; but the result looks reasonable if a circle of average radius (r = (a+b)/2) is used instead.
- It looks like a die cutter can generally both cut fabric and mark a pattern on it, by placing a pen rather than a knife in the carriage.

Most of the time was spent slowly realizing that the panels are exact ellipses. Maybe it was cool to realize you could sew 3D shapes flat by breaking them into convex surfaces and calculating the bounds! You could sew any shape by doing this and stuffing it. Parametrically creating the design lets one place things at precise points on the surface later.
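The arclength formula above maps directly onto scipy, which exposes E as scipy.special.ellipeinc(phi, m). Below is a hedged sketch (the helper name is mine, and the parametrization x = a*sin(t), y = b*cos(t) is one of the two radius orderings the note allows; it gives prefactor a with m = 1 - (b/a)^2, equivalent to the formula above with a and b swapped):

```python
# Sketch: arclength along an ellipse-like panel edge via the incomplete
# elliptic integral of the second kind, E(phi | m), from scipy.
# Parametrize the ellipse as x = a*sin(t), y = b*cos(t); then the arclength
# between angles t1 and t2 is a * (E(t2|m) - E(t1|m)) with m = 1 - (b/a)^2.
import numpy as np
from scipy.special import ellipeinc

def ellipse_arclength(a, b, t1, t2):
    """Arclength along x = a*sin(t), y = b*cos(t) from t1 to t2."""
    m = 1.0 - (b / a) ** 2
    return a * (ellipeinc(t2, m) - ellipeinc(t1, m))

# Sanity check: for a circle (a == b), m == 0 and E(phi|0) == phi,
# so a quarter turn of a unit circle has length pi/2.
print(ellipse_arclength(1.0, 1.0, 0.0, np.pi / 2))  # ≈ 1.5708
```

Note there is no elementary inverse of E, consistent with the point above: going from a target arclength back to an angle range needs a numeric solve.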
so basically i plotted the side view, but instead i need more of a surface view, where y is transformed to be a polar value, stretching the pattern vertically toward the top.
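One way to read that transform, for a sphere: take y to be the meridian arclength (the latitude angle scaled by the radius, i.e. a polar value) and the width at each height to be that latitude's circumference split across the panels. The sketch below is my own illustration of that idea under those assumptions; the name gore_outline and its parameters are not the notebook's actual sewing_points:

```python
# Sketch of the "surface view" panel for a sphere of radius r:
# the panel's vertical coordinate is the meridian arclength r*theta
# (theta = latitude), and its half-width at each latitude is that
# latitude's circumference 2*pi*r*cos(theta) divided across the panels.
# This stretches the side-view shape vertically toward the poles.
import numpy as np

def gore_outline(r=1.0, panel_count=4, steps=101):
    theta = np.linspace(-np.pi / 2, np.pi / 2, steps)   # latitude
    y = r * theta                                       # arclength up the meridian
    half_w = (2 * np.pi * r * np.cos(theta)) / (2 * panel_count)
    # closed outline: up the right edge, then back down the left edge
    xs = np.concatenate([half_w, -half_w[::-1]])
    ys = np.concatenate([y, y[::-1]])
    return xs, ys
```

Plotting this with an equal aspect ratio (plt.plot(*gore_outline())) gives one lens-shaped panel that tapers to points at the poles, which is the classic "gore" shape used to sew globes.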
I ended up making a general solve_eqn function so I would spend less time laboriously deriving unsolvable relations for one-time approximate values. It makes things much easier.

The attached image is a draft sewing pattern for a sphere of radius 1. Ideally the flat ends of the border would be rounded, but I was excited just to get the lines to connect right without intersecting. Yay!

import matplotlib.pyplot as plt
plt.gca().set_aspect('equal', adjustable='box')
plt.plot(*sewing_points(spheroid_rx=1, spheroid_ry=1, panel_count=4, offset=0, resolution=0.1))
plt.plot(*sewing_points(spheroid_rx=1, spheroid_ry=1, panel_count=4, offset=0.25, resolution=0.1))
plt.savefig('spheroid.png')
plt.show()
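The solve_eqn helper isn't shown in the post; a plausible minimal version (name reused from the text, but the signature and behavior are my assumptions) would wrap a bracketing root finder like scipy's brentq, trading closed-form inverses for one-time numeric solutions:

```python
# Hypothetical reconstruction of the solve_eqn helper mentioned above:
# numerically solve f(x) == target for x in [lo, hi] using scipy's brentq,
# instead of deriving a closed-form inverse (e.g. for E, the incomplete
# elliptic integral of the second kind, which has no elementary inverse).
from scipy.optimize import brentq

def solve_eqn(f, target, lo, hi):
    """Return x in [lo, hi] with f(x) == target (assumes f(x)-target changes sign)."""
    return brentq(lambda x: f(x) - target, lo, hi)

# Example: invert x**2 == 2 on [0, 2]
root = solve_eqn(lambda x: x * x, 2.0, 0.0, 2.0)
print(root)  # ≈ 1.41421
```

brentq only needs the function to bracket the target on [lo, hi], which fits the "one-time approximate values" use case without any symbolic derivation.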
processing medical data

it's actually somewhat normal to turn your medical data into 3d models and then e.g. 3d print copies of yourself or your bones or bodyparts or fossils or anything put through a medical scanner, apparently (this turns up when websearching how to turn medical formats into 3d models).

it looks like there are basically two major DICOM data libraries, DCMTK and GDCM. there's a small list of public data and libraries at https://github.com/open-dicom/awesome-dicom . i loaded some data (from my cancer treatment) into an open source viewer called amide, based on DCMTK, which i built from source laboriously with dependencies -- but amide doesn't let you export 3d models like other viewers do.

presently poking at DicomToMesh ( https://github.com/eidelen/DicomToMesh ) from the awesome-dicom link above. DicomToMesh depends on VTK ( https://gitlab.kitware.com/vtk/vtk.git ) as well as an auxiliary library called vtk-dicom ( https://github.com/dgobbi/vtk-dicom ) that must be manually enabled, and which in turn can link to either DCMTK or GDCM if configured to. unless you jump through all the hoops, DicomToMesh only loads very simple datasets.

when using amide or its dependency DCMTK, one specifies the DICOMDIR file; but with DicomToMesh and vtk-dicom, one instead specifies the path to the root folder of the media disk.

it looks like VTK 9.3 needs to be built with -DVTK_ENABLE_REMOTE_MODULES=YES -DVTK_MODULE_ENABLE_VTK_vtkDICOM=YES to include vtk-dicom so that it can load more datasets. the first flag needs to be enabled before cmake's first run or the second flag isn't recognised, because the first is used before its declaration. (troubleshot via Remote/vtkDICOM.remote.cmake in the VTK source tree; this file also holds the git url that the build will clone from.)
Additionally, one must set -DUSE_DCMTK=ON or -DUSE_GDCM=ON to be able to load any compressed data using a general-purpose dicom library; these flags in turn only appear if the previous two flags are enabled and the remote vtk-dicom module has been cloned by the build. The DicomToMesh instructions say to also pass -DVTK_MODULE_ENABLE_VTK_DICOM=YES ; this is maybe an old name for the module. I also ran into a build error until I modified the -DENABLE_SHARED build flag of VTK and DCMTK to match, and I had to wipe my CMakeCache.txt and re-pass all the flags for this change to take.

$ dicom2mesh -h
How to use dicom2Mesh:

Minimum example. This transforms a dicom data set into a 3d mesh file called mesh.stl by using a default iso value of 400 (shows bone)
dicom2mesh -i pathToDicomDirectory -o mesh.stl
This creates a mesh file in a binary format called mesh.stl
dicom2mesh -i pathToDicomDirectory -b -o mesh.stl
This creates a mesh file called abc.obj by using a custom iso value of 700
dicom2mesh -i pathToDicomDirectory -o abc.obj -t 700
This creates a mesh file by using an iso value range of 500 to 900
dicom2mesh -i pathToDicomDirectory -o abc.obj -t 500 -tu 900
This option offers the possibility to crop the input dicom volume. The created mesh is called def.ply.
dicom2mesh -i pathToDicomDirectory -z -o def.ply
This creates a mesh with a reduced number of polygons by 50% as default
dicom2mesh -i pathToDicomDirectory -r
This creates a mesh with a reduced number of polygons by 80%
dicom2mesh -i pathToDicomDirectory -r 0.8
This creates a mesh with the number of polygons limited to 10000. This has the same effect as reducing the mesh with -r; it does not make sense to use these two options together.
dicom2mesh -i pathToDicomDirectory -p 10000
This creates a mesh where small connected objects are removed. In particular, only connected objects with a minimum number of vertices of 20% of the object with the most vertices are part of the result.
dicom2mesh -i pathToDicomDirectory -e 0.2
This creates a mesh which is shifted to the coordinate system origin.
dicom2mesh -i pathToDicomDirectory -c
This creates a mesh which is smoothed.
dicom2mesh -i pathToDicomDirectory -s
This creates a mesh and shows it in a 3d view.
dicom2mesh -i pathToDicomDirectory -v
This shows the dicom data in a volume render [ (Red,Green,Blue,Alpha,Iso-Value) ].
dicom2mesh -i pathToDicomDirectory -vo (255,0,0,0,0) (255,255,255,60,700) ...
Alternatively a mesh file (obj, stl, ply) can be loaded directly, modified and exported again. This is handy to modify an existing mesh. Following example centers and saves a mesh as cba.stl.
dicom2mesh -i abc.obj -c -o cba.stl
A mesh can be created based on a list of png-file slices as input. The three floats followed after -sxyz are the x/y/z-spacing.
dicom2mesh -ipng [path1, path2, path3, ...] -sxyz 1.5 1.5 3.0 -c -o cba.stl
Arguments can be combined.

Here's a screenshot from the default output of dicom2mesh -v now that i finally got it to build at the end of the day. This is my living skull as it was in 2019, reconstructed from cancer monitoring imagery.
making a hat:
- plotting measurement points involves identifying basic features of the head, like the bump over the back of the head.
- i tried finding a good iso value in amide that showed my skin and everything, but when i set the iso to this in dicom2mesh (or dicom2stl, which does the same thing by calling out to vtk under the hood), it also shows other things in the output, like restraining devices they had placed on my head and chest, or noise for my hair, which obscure the surface. how to make this not too complex, but provide for these things?
- maybe if i made my own isosurface extractor, it would be easier here. if an existing visualizer provides for detecting clicks, the user could maybe click objects to remove them, etc ...

0220 0253

so, embodi3d does this for you. there are also a lot of AI models out there to work with this data, although they tend to focus on organs rather than the outside.

but maybe it would be simplest for the user to click on the top of their head, and then follow connections from there :s something like
- no other objects than the one clicked
- stop at thresholds for e.g. accumulated change in normal vector, especially divided by distance

O_O still kind of big for confused-up karl :s maybe it would make more sense for now to manually extract a model of a head using tools, and then feed that to something.

the next steps would be plotting points for placement on the surface, and where openings would go in the hat -- or maybe more directly, figuring out how to design it to fit the surface. this seems the way to go. so we could assume we have a model of a scalp, and then try to figure out where to fit it.

[energy reduced. unsure what future is for project.]
[next step: find a 3d object and fit a sewing design to its surface.]
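The "click the top of the head and follow connections from there" idea could be sketched as seeded region growing over mesh faces, keeping only faces whose normals stay close to the seed's (a simplification of the accumulated-normal-change threshold described above). Everything here -- the toy mesh, the function name, the threshold -- is my own illustration, not an existing tool's API:

```python
# Sketch of seeded region growing on a triangle mesh: start from a clicked
# face, walk breadth-first to adjacent faces, and stop where a face's normal
# deviates from the seed face's normal by more than a threshold angle.
# The normals and adjacency here are toy inputs; a real mesh would come
# from an STL/OBJ loader.
import numpy as np
from collections import deque

def grow_region(normals, adjacency, seed, max_angle_deg=60.0):
    """Return the set of face indices connected to `seed` whose normals
    stay within max_angle_deg of the seed face's normal."""
    cos_limit = np.cos(np.radians(max_angle_deg))
    seed_n = normals[seed] / np.linalg.norm(normals[seed])
    keep, queue = {seed}, deque([seed])
    while queue:
        face = queue.popleft()
        for nxt in adjacency[face]:
            if nxt in keep:
                continue
            n = normals[nxt] / np.linalg.norm(normals[nxt])
            if np.dot(seed_n, n) >= cos_limit:  # still "scalp-like"
                keep.add(nxt)
                queue.append(nxt)
    return keep

# Toy example: faces 0-2 point roughly up (scalp); face 3 points sideways
# (e.g. a restraining device) and is excluded even though it is connected.
normals = np.array([[0, 0, 1], [0.1, 0, 1], [0, 0.1, 1], [1, 0, 0]], float)
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(grow_region(normals, adjacency, 0))  # keeps faces 0, 1, 2 but not 3
```

Comparing each face to the seed normal is the simplest cut; the "accumulated change divided by distance" variant would instead track deviation along the walk, which handles a scalp that curves away from the seed's orientation.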
https://github.com/Slicer/Slicer
https://github.com/invesalius/invesalius3

regarding sewing:
- the point of digitizing the pattern is to automatically place all the electrodes, which can be laborious, especially if you have a palsy and lack a partner to measure your scalp.
- there are existing wash-off stabilizing materials for putting patterns on cloth, and tutorials out there for printing to these and transferring; but if the cloth is placed directly through the machine, then it can cut all those electrode holes itself.
- common diecutting machines like the cricut may be limited to 12 to 24 inches square or rectangular, although some machines let you do larger if you don't use the stabilizing mat or jerryrig something by hand.
- the theory of the machines is basically that by using a sticky mat, they can slide the media back and forth under the head, cutting and drawing on it, without it being displaced by the cutting action, because the sticky mat holds it. so, jerryrigged solutions for larger media may involve an alternative to a mat, such as a sticky backing; one tutorial used thin painter's tape. i imagine you could also slide the material inside a sleeve or cover it in tape. there are also a number of materials sold now for solving this problem, like cloth with a removable backing.
- working the machines takes a little normal calibration to configure the blade to cut through the media but not through its stabilizing backing.
- it's apparently normal and supported to put fabric into the machines, but not as common. the sketch pens can often draw on fabric, although it washes off; people replace them with fabric pens. the fabric again needs to be stabilized with a backing of some sort to be cut by a diecutting machine.
- there are stickier mats designed for fabric, i think.

oh, if i didn't link to it already: i had found an open source diecutting driver called inkcut https://github.com/inkcut/inkcut

future is uncertain around this rare and long-held project idea.
participants (1)
-
Undescribed Horrific Abuse, One Victim & Survivor of Many