Year Name
2024

Where to mount the IMU? Validation of joint angle kinematics and sensor selection for activities of daily living

L. Uhlenberg, O. Amft

Frontiers in Computer Science, Volume 6
WM3-4332-1 60/104/4: Dynamic Motion Simulation frameworks (DynaMoS)

Abstract

We validate the OpenSense framework for IMU-based joint angle estimation and furthermore analyze the framework's ability to support sensor selection and optimal positioning during activities of daily living (ADL). Personalized musculoskeletal models were created from anthropometric data of 19 participants. Quaternion coordinates were derived from measured IMU data and served as input to the simulation framework. Six ADLs involving upper and lower limbs were measured, and a total of 26 angles were analyzed. We compared the joint kinematics of IMU-based simulations with those of optical marker-based simulations for the most important angles per ADL. Additionally, we analyzed the influence of sensor count on estimation performance and deviations between joint angles, and derived the best sensor combinations. We report differences in functional range of motion (fRoMD) estimation performance. Results for IMU-based simulations showed MAD, RMSE, and fRoMD of 4.8°, 6.6°, and 7.2° for the lower limbs and 9.2°, 11.4°, and 13.8° for the upper limbs, depending on the ADL. Overall, sagittal plane movements (flexion/extension) showed lower median MAD, RMSE, and fRoMD compared to transversal and frontal plane movements (rotations, adduction/abduction). Analysis of sensor selection showed that beyond three sensors for the lower limbs and four sensors for the complex shoulder joint, the estimation error decreased only marginally. The global optimum (lowest RMSE) was obtained for five to eight sensors, depending on the joint angle, across all ADLs. The sensor combinations with the minimum count were a subset of the most frequent sensor combinations within a narrowed search space of the 5% lowest error range across all ADLs and participants. The smallest errors were on average < 2° over all joint angles.
Our results showed that the open-source OpenSense framework not only serves as a valid tool for realistic representation of joint kinematics and fRoM, but also yields valid results for IMU sensor selection for a comprehensive set of ADLs involving upper and lower limbs. The results can help researchers to determine appropriate sensor positions and sensor configurations without the need for detailed biomechanical knowledge.
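The agreement metrics reported above (MAD, RMSE, fRoMD) can be computed directly from a pair of joint-angle time series. The following is a minimal sketch with synthetic data; the function and variable names are illustrative and not taken from the OpenSense framework:

```python
import numpy as np

def angle_errors(imu_deg, marker_deg):
    """Compare two joint-angle time series (in degrees).

    Returns MAD (mean absolute difference), RMSE, and fRoMD
    (difference in functional range of motion, i.e. max - min).
    """
    imu = np.asarray(imu_deg, dtype=float)
    ref = np.asarray(marker_deg, dtype=float)
    diff = imu - ref
    mad = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    fromd = abs((imu.max() - imu.min()) - (ref.max() - ref.min()))
    return mad, rmse, fromd

# Example: a synthetic flexion/extension cycle with a small constant offset
t = np.linspace(0, 2 * np.pi, 200)
ref = 30 * np.sin(t)           # marker-based "ground truth"
imu = 30 * np.sin(t) + 2.0     # IMU estimate with a 2 deg bias
mad, rmse, fromd = angle_errors(imu, ref)  # mad ≈ rmse ≈ 2.0, fromd ≈ 0.0
```

Note that a pure offset inflates MAD and RMSE but leaves fRoMD untouched, which is why the paper reports the range-of-motion difference as a separate metric.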

 

Link to paper

2024

A Ferroelectric CMOS Microelectrode Array with ZrO2 Recording and Stimulation Sites for In-Vitro Neural Interfacing

M. T. Becker, A. Corna, B. Xu, U. Schroeder, O. Amft, S. Keil, R. Thewes, G. Zeck

IEEE BioSensors 2024, 28.-30.07.2024, Cambridge, United Kingdom

2024

Temporal Behavior Analysis for the Impact of Combined Temperature and Humidity Variations on a Photoacoustic CO2 Sensor

A. Srivastava, P. Sharma, A. Sikora, A. Bittner, A. Dehé

IEEE Applied Sensing Conference (APSCON), 22.-24.01.2024, Goa, India

2024

Data-driven Modelling of an Indirect Photoacoustic Carbon Dioxide Sensor

A. Srivastava, P. Sharma, A. Sikora, A. Bittner, A. Dehé

IEEE Applied Sensing Conference (APSCON), 22.-24.01.2024, Goa, India

2023

Novel Thermal MEMS Dynamic Pressure Sensor

A. Gupta, A. Bittner, A. Dehé

22nd International Conference on Solid-State Sensors, Actuators and Microsystems (Transducers), 25.-29.06.2023, Kyoto, Japan

2023

Thermoelectrical microphone

A. Gupta, A. Bittner, A. Dehé

IEEE MEMS, 15.-19.01.2023, Munich, Germany

2023

Three-dimensional folded MEMS manufacturing for an efficient use of area

D. Becker, A. Bittner, A. Dehé

Mikrosystemtechnikkongress, 23.-26.10.2023, Dresden, Germany

2023

Design and Optimization of Piezoelectric MEMS Resonator Electrodes using Finite Element Methods and Image Processing

A. Srivastava, A. Bittner, A. Dehé

Mikrosystemtechnikkongress, 23.-25.10.2023, Dresden, Germany

2023

Design and Evaluation of a Miniaturized Non-Resonant Photoacoustic CO2 Gas Sensor with Integrated Electronics

N. Zhang, A. Srivastava, X. Li, Y. Li, Z. Zhou, A. Bittner, X. Zhou, A. Dehé

IEEE Sensors, 29.10.-01.11.2023, Vienna, Austria

2023

Development of an Indirect Photoacoustic Sensor Concept for Highly Accurate Low-ppm Gas Detection

A. Srivastava, A. Bittner, A. Dehé

XXXV EUROSENSORS Conference, 10.-13.09.2023, Lecce, Italy

2023

Micro-Electrode-Cavity-Array (MECA) on a CMOS Chip

M. Amayreh, S. Elsaegh, M. Kuderer, C. Blattert, H. Rietsche, O. Amft

Black Forest Nanopore Meeting 2023, 06.-09.11.2023, Freiburg, Germany

Abstract
  • This work presents a CMOS-based nanopore sensing platform for high-resolution readout of nanopore events.
  • CMOS integration reduces the overall capacitance of the readout, which lowers the overall noise and thus allows detection of fast and/or small nanopore events.
  • The current readout circuit is configurable for different current ranges and bandwidths and is optimized for noise suppression. The circuit is divided into four amplifier stages.
  • Noise reduction techniques achieve a total integrated noise of only 18 pA RMS in a bandwidth of 1 MHz.
  • The ASIC was implemented in a 350 nm standard CMOS technology.
  • The ASIC comprises five channels, four of which are used as a MECA.
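As a back-of-the-envelope check on the noise figure above (assuming spectrally white input noise, which is our simplification and not a statement from the paper), the total integrated RMS noise relates to an equivalent input noise density via i_rms = density × √bandwidth:

```python
import math

# Total integrated RMS noise over a bandwidth, assuming white noise:
#   i_rms = density * sqrt(bandwidth)
density_a_per_rthz = 18e-15   # 18 fA/sqrt(Hz), assumed white-noise density
bandwidth_hz = 1e6            # 1 MHz readout bandwidth
i_rms = density_a_per_rthz * math.sqrt(bandwidth_hz)
# i_rms ≈ 1.8e-11 A = 18 pA RMS, consistent with the reported figure
```

This only shows the consistency of the reported numbers; the actual noise spectrum of a transimpedance readout is typically not white across the full bandwidth.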

 

Link to paper

2023

Modelling and Characterization of an Electro-Thermal MEMS Device for Gas Property Determination

P. Raimann, F. Hedrich, S. Billat, A. Dehé

Smart Systems Integration (SSI), 28.-30.03.2023, Bruges, Belgium

2023

Characterization and Modeling of Thermal MEMS for Selective Determination of Gas Properties

P. Raimann, F. Hedrich, S. Billat, A. Dehé

Sensor and Measurement Science International (SMSI), 08.-11.05.2023, Nuremberg, Germany

2023

Multisensory Tools for Cold Forging (project "Multisensorische Werkzeuge")

K. Grötzinger, A. Schott, B. Ehrbrecht

Final report on IGF project no. 21520 N

Abstract

Three institutes worked on this project:

Institute for Metal Forming Technology (IFU) at the University of Stuttgart, which provided the mechanical setup and the press required for the forming operation. In addition, the sensor data to be expected were computed by simulations.

Fraunhofer Institute for Surface Engineering and Thin Films IST, which equipped the forming tool (punch) with the sensing layers. Within the project, force measurement washers with multiple sensors were also developed to measure the forces arising during forming; these are intended in particular to detect tilting of the punch or faulty blanks.

Hahn-Schickard-Gesellschaft für Angewandte Forschung e.V. (HS), which developed embedded electronics for acquiring, evaluating, and transmitting the measurement data via a USB interface or wirelessly via Bluetooth LE. The partly very high-impedance sensors posed a particular challenge, since their signals are easily disturbed by electromagnetic interference, which is common in such heavy machinery. A PC application was created for visualizing the measurement data.

The full version of the final report can be requested from the FSV, Goldene Pforte 1, 58093 Hagen.

2023

Analysis of tool heating in cold forging using thin-film sensors

K. C. Grötzinger, A. Schott, M. Rekowski, B. Ehrbrecht, T. Hehn, D. Gerasimov, M. Liewald

International ESAFORM Conference, 19.-21. April 2023, Krakow, Poland

Abstract

Data acquisition and data analysis to gain a better process understanding are among the most promising trends in manufacturing technology. Especially in cold forging processes, data acquisition close to the deformation zone is challenging due to the high surface pressure. Thus far, process parameters such as die temperature are mainly measured with state-of-the-art sensors, including standard thermocouples, which are integrated into the tooling system. The application of thin-film sensors has been tested in hot forging processes for local die temperature measurement. However, the process conditions regarding tribology and tool load in cold forging are even more demanding. In this contribution, the use of thin-film sensors, applied on a cold forging punch for cup backward extrusion, is examined. The aim is to investigate the applicability of such thin-film sensors in cold forging with special emphasis on temperature measurement in cyclic forming processes. The thin-film sensor system and its manufacturing procedure by vacuum coating technology combined with microstructuring are described. With these thin-film sensors, the cup backward cold extrusion of steel billets was investigated experimentally. Cyclic tool heating simulations with thermal parameter variations were performed as a reference for the experimental results.

 

Link to paper

2023

Comparison of non-pulsating reflective PPG signals in skin phantom, wearable device prototype, and Monte Carlo simulations

M. Reiser, T. Müller, K. Flock, O. Amft, A. Breidenassel

45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 24.-27.07.2023, Sydney, Australia

Abstract

We obtain and compare the non-pulsating part of reflective photoplethysmogram (PPG) measurements in a porcine skin phantom and a wearable device prototype with Monte Carlo simulations and analyse the received signal. In particular, we investigate typical PPG wavelengths at 520, 637 and 940 nm and source-detector distances between 1.5 and 8.0 mm. We detail the phantom's optical parameters, the wearable device design, and the simulation setup. Monte Carlo simulations used layer-based and voxel-based structures. Patterns of the detected photon weights showed comparable trends. PPG signal, differential pathlength factor (DPF), mean maximum penetration depth, and signal level showed dependencies on the source-detector distance d for all wavelengths. We demonstrate the signal dependence on emitter and detection angles, which is of interest for the development of wearables.
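The differential pathlength factor (DPF) mentioned above is commonly defined as the mean pathlength of the detected photons divided by the source-detector distance d. A minimal sketch, assuming per-photon pathlengths are available from the Monte Carlo output (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def differential_pathlength_factor(pathlengths_mm, d_mm):
    """DPF = mean detected-photon pathlength / source-detector distance.

    pathlengths_mm: total path travelled by each detected photon (mm).
    d_mm: source-detector separation (mm).
    """
    return float(np.mean(pathlengths_mm)) / d_mm

# Example: detected photons travelling ~25 mm on average at d = 5 mm
paths = np.array([24.0, 26.0, 25.0, 25.0])
dpf = differential_pathlength_factor(paths, 5.0)  # -> 5.0
```

The DPF thus expresses how much longer the effective optical path is than the geometric source-detector separation, which is why it depends on d and wavelength as reported above.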

2023

"EDIH Südwest" – Consulting and technology services for digital transformation

S. Spieth

14. InnovationForum Smarte Technologien & Systeme, 15.06.2023, Donaueschingen

2023

"EDIH Südwest" – Consulting and technology services for the digital transformation of companies and public administration

S. Spieth

microTEC Südwest Clusterkonferenz 2023, 15.-16.05.2023, Freiburg

2023

Multi-scale Bowel Sound Event Spotting in Highly Imbalanced Wearable Monitoring Data: Algorithm Development and Validation

A. Baronetto, L. Graf, S. Fischer, M. Neurath, O. Amft

JMIR Preprints, doi: 10.2196/preprints.51118

Abstract

Background:

Abdominal auscultation, i.e. listening to Bowel Sounds (BS), can be used to analyse digestion. Automated retrieval of BS would be beneficial for assessing gastro-intestinal disorders non-invasively.

Objective:

To develop a multi-scale spotting model to detect BS in continuous audio data from a wearable monitoring system.

Methods:

We designed a spotting model based on the Efficient-U-Net (EffUNet) architecture to analyse 10-second audio segments at a time and spot BS with a temporal resolution of 25 ms. Evaluation data were collected across different digestive phases from 18 healthy participants and 9 patients with Inflammatory Bowel Disease (IBD). Audio data were recorded in a daytime setting with a T-shirt that embeds digital microphones. The dataset was annotated by independent raters with substantial agreement (Cohen’s κ between 0.70 and 0.75), resulting in 136 h of labelled data. In total, 11,482 BS were analysed, with BS duration ranging between 18 ms and 6.3 s. The share of BS in the dataset (BS ratio) was 0.89%. We analysed performance depending on noise level, BS duration, and BS event rate, and report spotting timing errors.

Results:

Leave-One-Participant-Out cross-validation of BS event spotting yielded a median F1 score of 0.73 for both healthy volunteers and patients. EffUNet detected BS in different noise conditions with 0.73 recall and 0.72 precision. In particular, for SNR > 4 dB, more than 83% of BS were recognised, with precision ≥ 0.77. EffUNet recall dropped below 0.60 for BS durations ≥ 1.5 s. At BS ratios > 5%, our model's precision was > 0.83. For both healthy participants and patients, insertion and deletion timing errors were the largest, with a total of 15.54 min of insertion errors and 13.08 min of deletion errors over the total audio dataset. On our dataset, EffUNet outperformed existing BS spotting models that provide similar temporal resolution.

Conclusions:

The EffUNet spotter is robust against background noise and can retrieve BS with varying duration. EffUNet outperforms previous BS detection approaches in unmodified audio data, containing highly sparse BS events.
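The precision, recall, and F1 figures reported above can be illustrated on binary BS/non-BS frame labels at the model's 25 ms resolution. This is a minimal sketch on toy data, not the authors' evaluation code:

```python
import numpy as np

def frame_f1(pred, truth):
    """Frame-wise precision, recall, and F1 for binary BS/non-BS frames.

    pred, truth: boolean sequences with one entry per 25 ms frame.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)     # frames correctly flagged as BS
    fp = np.sum(pred & ~truth)    # false alarms
    fn = np.sum(~pred & truth)    # missed BS frames
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

truth = [0, 1, 1, 1, 0, 0]   # a single BS event spanning three frames
pred  = [0, 1, 1, 0, 1, 0]   # one frame missed, one false alarm
p, r, f = frame_f1(pred, truth)  # p = r = f1 = 2/3
```

With the extreme class imbalance described above (BS ratio 0.89%), frame-wise accuracy would be misleadingly high, which is why F1 over the positive class is the reported metric.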

 

Link to publication

2023

Segment-based Spotting of Bowel Sounds using Pretrained Models in Continuous Data Streams

A. Baronetto, L. Graf, S. Fischer, M. Neurath, O. Amft

IEEE Journal of Biomedical and Health Informatics, pp. 3164-3174, doi: 10.1109/JBHI.2023.3269910

Abstract

We analyse pretrained and non-pretrained deep neural models to detect 10-second Bowel Sound (BS) audio segments in continuous audio data streams. The models include MobileNet, EfficientNet, and Distilled Transformer architectures. Models were initially trained on AudioSet and then transferred to and evaluated on 84 hours of labelled audio data from eighteen healthy participants. Evaluation data were recorded in a semi-naturalistic daytime setting, including movement and background noise, using a smart shirt with embedded microphones. The collected dataset was annotated for individual BS events by two independent raters with substantial agreement (Cohen’s κ = 0.74). Leave-One-Participant-Out cross-validation for detecting 10-second BS audio segments, i.e. segment-based BS spotting, yielded best F1 scores of 73% and 67% with and without transfer learning, respectively. The best model for segment-based BS spotting was EfficientNet-B2 with an attention module. Our results show that pretrained models could improve the F1 score by up to 26%, in particular by increasing robustness against background noise. Our segment-based BS spotting approach reduces the amount of audio data to be reviewed by experts from 84 h to 11 h, i.e. by ∼87%.
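The Leave-One-Participant-Out cross-validation used in both BS studies above can be sketched as follows; a generic illustration with hypothetical participant IDs, not the authors' pipeline:

```python
import numpy as np

def lopo_splits(participant_ids):
    """Leave-One-Participant-Out folds.

    Yields (train_indices, test_indices) pairs, where each fold holds
    out all segments of exactly one participant for testing.
    """
    ids = np.asarray(participant_ids)
    for p in np.unique(ids):
        test_mask = ids == p
        yield np.flatnonzero(~test_mask), np.flatnonzero(test_mask)

# Example: 6 audio segments from 3 participants
ids = ["p1", "p1", "p2", "p2", "p3", "p3"]
folds = list(lopo_splits(ids))
# 3 folds; in each fold, the test indices belong to exactly one participant
```

Splitting by participant rather than by segment prevents segments of the same person from appearing in both train and test sets, which would otherwise inflate the reported F1 scores.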

 

Link to publication