In biology, the environment can be defined as the set of climatic, biotic, social and edaphic factors that act on an organism and determine its development and survival. It therefore includes anything that can directly affect the metabolism or behaviour of living organisms or species, such as light, air, water, soil and other factors. See also the articles on the natural environment and natural selection.
In architecture, ergonomics and occupational safety, the environment is the set of characteristics of a room or building that affect quality of life and productivity, including the dimensions and layout of living spaces and furnishings, lighting, ventilation, temperature, noise, etc. It can also refer to the built structures as a whole. See also the article on the built environment.
In psychology, environmentalism is the theory that the environment (in the general and social sense) plays a greater role than heredity in determining an individual's development. More specifically, the environment is an important factor in many psychological theories.
In art, the environment is a driving force and a muse, inspiring painters and poets. It serves as inspiration in all art forms, and the fine arts reveal the influence it has exerted on artists regardless of the medium in which they work. Within the environment, humans create music, painting, poetry, sculpture, dance, song and theatre, and all forms of art draw direct inspiration from it.

Wednesday, April 17, 2019

Computer Assisted Radiology and Surgery

Multi-organ segmentation of the head and neck area: an efficient hierarchical neural networks approach

Abstract

Purpose

In radiation therapy, a key step for a successful cancer treatment is image-based treatment planning. One objective of the planning phase is the fast and accurate segmentation of organs at risk and target structures from medical images. However, manual delineation of organs, which is still the gold standard in many clinical environments, is time-consuming and prone to inter-observer variations. Consequently, many automated segmentation methods have been developed.

Methods

In this work, we train two hierarchical 3D neural networks to segment multiple organs at risk in the head and neck area. First, we train a coarse network on size-reduced medical images to locate the organs of interest. Second, a subsequent fine network on full-resolution images is trained for a final accurate segmentation. The proposed method is purely deep learning based; accordingly, no pre-registration or post-processing is required.
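
To make the coarse-to-fine idea concrete, here is a minimal, hypothetical sketch in PyTorch (the network class, parameter values and helper names are illustrative assumptions, not the authors' architecture): a coarse network run on a downsampled volume localizes the organ, the full-resolution volume is cropped around that location, and a fine network produces the final segmentation.

```python
# Hypothetical coarse-to-fine 3D segmentation sketch (not the paper's architecture).
import torch
import torch.nn.functional as F
from torch import nn

class Tiny3DNet(nn.Module):
    """Placeholder 3D segmentation network (illustrative only)."""
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, out_ch, 1),
        )
    def forward(self, x):
        return self.net(x)

def coarse_to_fine_segment(volume, coarse_net, fine_net, coarse_factor=4, margin=8):
    """volume: (1, 1, D, H, W) CT tensor."""
    # 1) Coarse pass on a size-reduced volume to locate the organ of interest.
    small = F.interpolate(volume, scale_factor=1 / coarse_factor,
                          mode="trilinear", align_corners=False)
    coarse_mask = coarse_net(small).argmax(dim=1, keepdim=True)
    idx = coarse_mask[0, 0].nonzero()
    if idx.numel() == 0:                      # nothing found at coarse scale
        return torch.zeros_like(volume, dtype=torch.long)
    # 2) Bounding box of the coarse prediction, scaled back to full resolution.
    lo = (idx.min(dim=0).values * coarse_factor - margin).clamp(min=0)
    hi = idx.max(dim=0).values * coarse_factor + margin
    crop = volume[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # 3) Fine pass on the full-resolution crop for the final accurate segmentation.
    fine_mask = fine_net(crop).argmax(dim=1, keepdim=True)
    out = torch.zeros_like(volume, dtype=torch.long)
    out[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_mask
    return out
```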

Results

The approach has been applied on a publicly available computed tomography dataset, created for the MICCAI 2015 Auto-Segmentation challenge. In an extensive evaluation process, the best configurations for the trained networks have been determined. Compared to the existing methods, the presented approach shows state-of-the-art performance for the segmentation of seven different structures in the head and neck area.

Conclusion

We conclude that 3D neural networks outperform most existing model- and atlas-based methods for the segmentation of organs at risk in the head and neck area. The ease of use, high accuracy and test-time efficiency of the method make it promising for image-based treatment planning in clinical practice.



A novel image-based retrieval system for characterization of maxillofacial lesions in cone beam CT images

Abstract

Purpose

The objective of medical content-based image retrieval (CBIR) is to assist clinicians in decision making by retrieving the most similar cases to a given query image from a large database. Herein, a new method for content-based image retrieval of cone beam CT (CBCT) scans is presented.

Methods

The introduced framework consists of two main phases: training database construction and querying. The training phase builds the database in three main steps. First, lesions are segmented automatically using 3D symmetry analysis; embedding prior shape knowledge of the 3D symmetry of the healthy human head structure increases the accuracy of the automatic segmentation. Second, features are extracted using spatial pyramid matching. Third, the relative importance of each feature is learned using classifiers.
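
As a rough illustration of the querying phase (not code from the paper; the feature vectors, learned weights and labels are assumed inputs), retrieval can be sketched as a weighted nearest-neighbour search in the learned feature space:

```python
# Hypothetical sketch of CBIR querying: weighted nearest-neighbour retrieval.
import numpy as np

def retrieve(query_feat, db_feats, db_labels, feature_weights, k=5):
    """Return the k database cases closest to the query in weighted feature space."""
    w = np.sqrt(np.asarray(feature_weights, dtype=float))  # learned importance
    dists = np.linalg.norm((db_feats - query_feat) * w, axis=1)
    order = np.argsort(dists)[:k]
    return [(db_labels[i], dists[i]) for i in order]
```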

Results

The method was applied to a dataset of 1145 volumetric CBCT images with four classes of maxillofacial lesions. A symmetry-based analysis model for automatic lesion segmentation was evaluated using similarity measures. Mean Dice coefficients of 0.89, 0.85, 0.92, and 0.87 were achieved for maxillary sinus perforation, radiolucent lesion, unerupted tooth, and root fracture classes, respectively. Moreover, the execution time of automatic segmentation was reduced to 3 min per case. The performance of the proposed search engine was evaluated using mean average precision and normalized discounted cumulative gain. A mean average retrieval accuracy and normalized discounted cumulative gain of 0.90 and 0.92, respectively, were achieved.
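
For reference, the Dice coefficient reported above is the standard overlap measure between two binary masks; a minimal implementation (not taken from the paper) is:

```python
# Standard Dice coefficient between two binary segmentation masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())   # assumes at least one mask is non-empty
```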

Conclusion

Quantitative results show that the proposed approach is more effective than previous methods in the literature, and it can facilitate the introduction of CBIR in clinical CBCT applications.



EyeSAM: graph-based localization and mapping of retinal vasculature during intraocular microsurgery

Abstract

Purpose

Robot-assisted intraocular microsurgery can improve performance by aiding the surgeon in operating on delicate micron-scale anatomical structures of the eye. In order to account for the eyeball motion that is typical in intraocular surgery, there is a need for fast and accurate algorithms that map the retinal vasculature and localize the retina with respect to the microscope.

Methods

This work extends our previous work with a graph-based SLAM formulation using a sparse incremental smoothing and mapping (iSAM) algorithm.
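
As a generic illustration of incremental smoothing and mapping (not the EyeSAM implementation, and using 2D poses rather than retinal vasculature features), a tiny pose graph can be optimized incrementally with GTSAM's iSAM2:

```python
# Generic incremental pose-graph example with GTSAM's iSAM2 (illustrative only).
import numpy as np
import gtsam

noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
isam = gtsam.ISAM2()

# Anchor the first pose with a prior factor.
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), noise))
values.insert(0, gtsam.Pose2(0, 0, 0))
isam.update(graph, values)

# Incrementally add between-pose (odometry) factors as new frames arrive.
for k in range(1, 5):
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    graph.add(gtsam.BetweenFactorPose2(k - 1, k, gtsam.Pose2(1.0, 0.0, 0.0), noise))
    values.insert(k, gtsam.Pose2(float(k), 0.0, 0.0))   # initial guess
    isam.update(graph, values)

estimate = isam.calculateEstimate()   # current smoothed trajectory
print(estimate.atPose2(4))
```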

Results

The resulting technique, "EyeSAM," performs SLAM for intraoperative vitreoretinal surgical use while avoiding the spurious duplication of structures that occurred with the previous, simpler technique. It also yields a reduction in the average pixel error of the camera motion estimation.

Conclusions

This work provides techniques to improve intraoperative tracking of retinal vasculature by handling loop closures and achieving increased robustness to quick shaky motions and drift due to uncertainties in the motion estimation.



Interactive patient-customized curvilinear reformatting for improving neurosurgical planning

Abstract

Purpose

Visualizing a brain in its native space plays an essential role during neurosurgical planning because it allows the superficial cerebral veins and surrounding regions to be preserved. This paper describes the use of a visualization tool in which single gadolinium contrast-enhanced T1-weighted magnetic resonance imaging was applied in nondefective and nonresective skulls to promote visualization of important structures.

Methods

A curvilinear reformatting tool was applied on the supratentorial compartment to peel the tissues to the depth of the dura mater, thereby revealing cortical and vascular spatial relationships. The major advantage of our proposed tool is that it does not require coregistration of anatomical and vascular volumes.
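
A rough sketch of the underlying idea (an assumption for illustration, not the authors' implementation): index each voxel by its distance below the head surface and extract a thin shell at the desired depth, so the cortex is viewed "peeled" to that depth.

```python
# Hypothetical depth-peeling sketch using a Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def curvilinear_shell(volume, head_mask, depth_mm, thickness_mm, spacing):
    """Return intensities of the shell lying depth_mm below the head surface."""
    # Distance (in mm) of every inside-head voxel from the outer surface.
    depth = distance_transform_edt(head_mask, sampling=spacing)
    shell = (depth >= depth_mm) & (depth < depth_mm + thickness_mm)
    out = np.zeros_like(volume)
    out[shell] = volume[shell]
    return out
```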

Results

The reliability of this technique was supported by comparisons between preoperative images and digital photographs of the brain cortical surface obtained after the dura mater was removed in 20 patients who underwent surgery in the Clinics Hospital of the University of Campinas from January 2017 to April 2018.

Conclusion

Single fat-suppressed GAD contrast-enhanced T1-weighted magnetic resonance scans provide accurate preoperative 3D views of cortical and vascular relationships similar to neurosurgeons' intraoperative views. In developing countries with limited access to state-of-the-art health technologies, this imaging approach may improve the safety of complex neurosurgeries.



A direct volume rendering visualization approach for serial PET–CT scans that preserves anatomical consistency

Abstract

Purpose

Our aim was to develop an interactive 3D direct volume rendering (DVR) visualization solution to interpret and analyze complex, serial multi-modality imaging datasets from positron emission tomography–computed tomography (PET–CT).

Methods

Our approach uses: (i) a serial transfer function (TF) optimization to automatically depict particular regions of interest (ROIs) over serial datasets with consistent anatomical structures; (ii) integration of a serial segmentation algorithm to interactively identify and track ROIs on PET; and (iii) parallel graphics processing unit (GPU) implementation for interactive visualization.
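
As a minimal illustration of component (i) (an assumed sketch, not the authors' code), a 1D transfer function maps intensities to colour and opacity; reusing the same mapping across serial scans keeps corresponding anatomy looking consistent.

```python
# Hypothetical 1D transfer function for direct volume rendering.
import numpy as np

def apply_transfer_function(volume, control_points):
    """control_points: list of (intensity, (r, g, b, alpha)) pairs, sorted by intensity."""
    xs = np.array([p[0] for p in control_points], dtype=float)
    cs = np.array([p[1] for p in control_points], dtype=float)   # (N, 4) RGBA
    rgba = np.stack([np.interp(volume.ravel(), xs, cs[:, c]) for c in range(4)], axis=-1)
    return rgba.reshape(volume.shape + (4,))
```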

Results

Our DVR visualization identifies changes in ROIs across serial scans more easily and in an automated fashion, and the parallel GPU computation enables interactive visualization.

Conclusions

Our approach provides a rapid 3D visualization of relevant ROIs over multiple scans, and we suggest that it can be used as an adjunct to conventional 2D viewing software from scanner vendors.



Missed paranasal sinus compartments in sinus surgery with and without image-guidance systems: a pilot feasibility study

Abstract

Purpose

Image-guidance systems (IGS) have gained widespread use in endoscopic sinus surgery (ESS) and have been thoroughly analysed. In this study, we looked for a new parameter to determine whether patients could directly benefit from the use of IGS during primary ESS. We asked whether IGS could improve the quality of ESS in chronic rhinosinusitis (CRS) patients by allowing a more comprehensive treatment of all involved sinus compartments.

Methods

In a pilot feasibility study, we evaluated uncomplicated CRS patients following primary ESS with and without IGS between January 2011 and June 2012 using preoperative and postoperative CT scans. The preoperative CT scans identified the sinus compartments requiring surgery. The postoperative CT scans were used to evaluate the treatment effect in these compartments. From these data, we calculated a missing ratio (missed compartments/compartments requiring surgery) for each patient.
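
A trivial worked example of the missing ratio defined above (the numbers are hypothetical):

```python
# Hypothetical example of the missing ratio: missed compartments / compartments requiring surgery.
compartments_requiring_surgery = 8
missed_compartments = 3
missing_ratio = missed_compartments / compartments_requiring_surgery  # 0.375, i.e. 37.5%
```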

Results

Of the 169 ESS patients who were treated, ten patients were retrospectively identified as meeting the inclusion and exclusion criteria following ESS with IGS. Ten patients treated without IGS were then randomly chosen. The median missing ratio for non-IGS patients was 36%, and for IGS patients, the median missing ratio was 0% (p = 0.046). However, the missing ratio depended on the number of compartments requiring surgery. Stratification by the number of compartments requiring surgery resulted in an exact p value of 0.13.

Conclusions

IGS could help the surgeon to address diseased sinus compartments more completely. For better scientific merit, a comparative study of ESS with and without IGS seems feasible, using the proposed missing ratio (missed compartments/compartments requiring surgery) as the primary outcome parameter.



Random forest classifiers aid in the detection of incidental osteoblastic osseous metastases in DEXA studies

Abstract

Purpose

Dual-energy X-ray absorptiometry (DEXA) studies are used for screening patients for low bone mineral density (BMD). Patients with breast and prostate cancer are often treated with hormone-altering drugs that result in low BMD. These patients may have incidental osteoblastic metastases of the spine that may be detected on screening DEXA studies. The aim of this pilot study is to assess whether random forest classifiers or support vector machines can identify patients with incidental osteoblastic metastases of the spine from screening DEXA studies and to evaluate which technique is better.

Methods

We retrospectively reviewed the DEXA studies from 200 patients (155 normal control patients and 45 patients with osteoblastic metastases of one or more lumbar vertebral bodies from L1 to L4). The dataset was split into training (80%) and validation (20%) datasets. The optimal random forest (RF) and support vector machine (SVM) classifiers were obtained. Receiver operating characteristic curves were compared with DeLong's test.
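
A minimal sketch of such a comparison (assuming generic tabular features and labels; this is not the authors' pipeline or data) using scikit-learn:

```python
# Hypothetical RF vs. SVM comparison on a held-out validation split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # placeholder DEXA-derived features
y = rng.integers(0, 2, size=200)     # placeholder labels (metastasis yes/no)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)

for name, model in [("RF", rf), ("SVM", svm)]:
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```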

Results

The sensitivity, specificity, accuracy and area under the curve (AUC) of the optimal RF classifier were 77.8%, 100.0%, 98.0% and 0.889, respectively, in the validation dataset. The sensitivity, specificity, accuracy and AUC of the optimal SVM classifier were 33.3%, 96.8%, 82.5% and 0.651 in the validation dataset. The RF classifier was significantly better than the SVM classifier (P = 0.008). Only 7 of the 45 patients with osteoblastic metastases (15.6%) were prospectively identified by the radiologist interpreting the study.

Conclusions

RF classifiers can be used as a useful adjunct to identify incidental lumbar spine osteoblastic metastases in screening DEXA studies.



A novel multiple communication paths for surgical telepresence videos delivery of the maxilla area in oral and maxillofacial surgery

Abstract

Purpose

Surgical telepresence connects two surgical sites: a less experienced local surgeon at the surgery site needs help from an expert surgeon located at a remote site. The primary aim of this paper is to improve the quality of the surgical video sent and received between the two surgical sites, which has so far been a major quality issue.

Method

This work considers flow rate allocation and resource availability to determine the network path quality. Furthermore, a segmented backup path is used to provide timely recovery in case of a link failure. A neighbour detection technique in the segmented backup is used to reduce the network's failure-detection latency when a link fails.
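
A highly simplified, hypothetical sketch of the path-selection idea (the scoring function and data structures are assumptions, not the paper's algorithm): rank candidate paths by a quality score that combines the bottleneck flow rate with per-link availability, and keep a backup ready to switch to on failure.

```python
# Hypothetical path-quality scoring and selection.
def path_quality(path, link_rate, link_avail):
    """Bottleneck flow rate weighted by the product of per-link availabilities."""
    rate = min(link_rate[l] for l in path)
    avail = 1.0
    for l in path:
        avail *= link_avail[l]
    return rate * avail

def select_path(paths, link_rate, link_avail):
    """Choose the candidate path with the highest quality score."""
    return max(paths, key=lambda p: path_quality(p, link_rate, link_avail))
```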

Results

The results show that the proposed system improves the quality of the surgical video by an average of 5.5 dB over the current system. Furthermore, the neighbour detection technique detects network failures 40–45% faster than the currently used end-to-end detection system. The experiments were performed on the maxilla area in oral and maxillofacial surgery.

Conclusion

The proposed system concentrates on reducing the network failure-detection latency and on improving the quality of the sent and received video by using an enhanced path-quality technique. Thus, this study enhances the video quality and provides a backup option in case of failure, offering timely recovery for communication between the two surgeons.



A practical marker-less image registration method for augmented reality oral and maxillofacial surgery

Abstract

Background

Image registration lies at the core of augmented reality (AR), aligning the virtual scene with reality. In AR surgical navigation, the performance of image registration is vital to the surgical outcome.

Methods

This paper presents a practical marker-less image registration method for AR-guided oral and maxillofacial surgery where a virtual scene is generated and mixed with reality to guide surgical operation or provide surgical outcome visualization in the manner of video see-through overlay. An intraoral 3D scanner is employed to acquire the patient's teeth shape model intraoperatively. The shape model is then registered with a custom-made stereo camera system using a novel 3D stereo matching algorithm and with the patient's CT-derived 3D model using an iterative closest point scheme, respectively. By leveraging the intraoral 3D scanner, the CT space and the stereo camera space are associated so that surrounding anatomical models and virtual implants could be overlaid on the camera's view to achieve AR surgical navigation.
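
As an illustration of the ICP step alone (using Open3D as a stand-in library; the stereo-matching and intraoral-scanner specifics are omitted, and the function below is an assumption rather than the authors' code):

```python
# Hypothetical ICP alignment of the intraoral teeth scan to the CT-derived teeth model.
import numpy as np
import open3d as o3d

def register_icp(scan_points, ct_points, init=np.eye(4), max_dist=2.0):
    """scan_points, ct_points: (N, 3) arrays in millimetres."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(ct_points))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # 4x4 rigid transform: scan space -> CT space
```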

Results

Jaw phantom experiments were performed to evaluate the target registration error of the overlay, which yielded an average error of less than 0.50 mm with a time cost of less than 0.5 s. A volunteer trial was also conducted to demonstrate clinical feasibility.

Conclusions

The proposed registration method does not rely on any external fiducial markers attached to the patient. It runs automatically so as to maintain a correct AR scene, overcoming the misalignment caused by the patient's movement. Therefore, it is noninvasive and practical in oral and maxillofacial surgery.



Subject-specific modelling of pneumoperitoneum: model implementation, validation and human feasibility assessment

Abstract

Purpose

The aim of this study is to propose a model that simulates patient-specific anatomical changes resulting from pneumoperitoneum, using preoperative data as input. The framework can assist the surgeon through real-time visualisation and interaction with the model. This could further facilitate surgical planning, preoperatively by helping define a surgical strategy and intraoperatively by helping estimate port positions.

Methods

The biomechanical model that simulates pneumoperitoneum was implemented within the GPU-accelerated NVIDIA FleX position-based dynamics framework. Datasets of multiple porcine subjects before and after abdominal insufflation were used to generate, calibrate and validate the model. The feasibility of modelling pneumoperitoneum in human subjects was assessed by comparing distances between specific landmarks from a patient abdominal wall, to the same landmark measurements on the simulated model.

Results

The calibration of the simulation parameters resulted in a successful estimation of an optimal parameter set. A correspondence between the simulation pressure parameter and the experimental insufflation pressure was determined. The simulation of pneumoperitoneum in a porcine subject resulted in a mean Hausdorff distance error of 5–6 mm. The feasibility of modelling pneumoperitoneum in humans was successfully demonstrated.
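
For reference, a surface-to-surface error such as the Hausdorff distance between a simulated and a measured abdominal wall (each given as a point cloud) can be computed as follows; this is the standard definition, not code from the paper:

```python
# Symmetric Hausdorff distance between two point sets (e.g. simulated vs. measured wall).
import numpy as np
from scipy.spatial import cKDTree

def hausdorff(a, b):
    """a, b: (N, 3) point clouds in millimetres."""
    d_ab = cKDTree(b).query(a)[0]   # nearest-neighbour distance from each a-point to b
    d_ba = cKDTree(a).query(b)[0]
    return max(d_ab.max(), d_ba.max())
```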

Conclusion

Simulation of pneumoperitoneum provides an accurate subject-specific 3D model of the inflated abdomen, which is a more realistic representation of the intraoperative scenario when compared to preoperative imaging alone. The simulation results in a stable and interactive framework that performs in real time, and supports patient-specific data, which can assist in surgical planning.


