There are different staining techniques, each specifically suited for particular experiments, and selecting the appropriate method is crucial to ensure a proper acquisition (Parekh and Ascoli). After any of these chemical staining processes, microscopes are able to capture the neuron morphology, including the somata, dendrites, and axons. Modern techniques such as multiphoton microscopy (Zipfel et al.) produce stacks of images of the labeled tissue. From these image stacks, it is possible to trace the 3D contour or path of the main components of each neuron to digitally reconstruct the neuron morphology in a manual (Glaser and Glaser) or semiautomatic (Oaks et al.) fashion.
The extracted shape and the placement accuracy of the morphological points traced along the neuron contour or path depend heavily not only on the quality of the obtained image stacks but also on the expertise of the human operator, who is in charge of placing the morphological points within the neurites by manually clicking with a mouse or by setting the parameters of semi-automated algorithms.
The morphological tracing procedures described above define a unique tree structure, with a root node at the cell soma and an ordered sequence of interconnected nodes that defines segments within the original shape of the neurites, also including the neurite thickness at each morphological point.
Unlike the neurite tracings, a detailed description of the soma is not commonly stored. Typically, the data stored in these tracings include either a single morphological point placed at the soma center plus the average soma radius, or a set of connected points tracing the 2D projection of the soma contour from a specific point of view, which is clearly not valid for other points of view. Although tracings provide essential information, one limitation when visualizing them is that the neurite thickness cannot be perceived.
Visualizing their corresponding 3D meshes improves spatial perception, allowing the user to better perceive how the different neurites relate to one another spatially, including their proximity. Also, the neurite thickness and volume are immediately perceived, and a 3D shape of the soma can be viewed. In addition, having a 3D mesh makes it possible to associate values with the neural membrane. The 3D visualization of digitized neurons presents some problems, however. If mesh-based methods are to be used for rendering, it is necessary to generate meshes with enough resolution to capture fine detail.
This is a common problem in many 3D computer graphics applications, and perhaps the main issue in this case is that the number of neurons to be displayed can increase above any prespecified limit such as in large-scale simulations using detailed geometric models for neurons. This imposes additional scalability restrictions when attempting to come up with practical solutions. In addition, it is common to find that publicly available collections of 3D neuron reconstructions do not have complete geometrical descriptions, which are necessary to generate the meshes from the available data.
In the case of Neurolucida, the soma is approximated with a 2D disk, which is not even saved when exporting the 3D model. More recent approaches, such as the Py3DN toolbox (Aguiar et al.), adapt a set of successive overlapped planes generated taking into account the initial points of the dendrites. However, this toolbox does not connect the generated soma with the dendrites either. Other methods, such as that of Lasserre et al., follow a different strategy.
This method starts from a sphere made of quads with a fixed resolution, where the dendrites are generated by quad extrusion starting from the soma. At the end of the method, a Catmull-Clark subdivision smooths the whole mesh, generating realistic, smooth, and closed meshes. Nonetheless, due to the fixed initial soma geometry, the final shape of the obtained soma remains too spherical. Neuronize (Brito et al.) instead deforms an initial shape using a mass-spring system, generating not only a realistic soma but also a good approximation of important morphological parameters such as the soma volume and area.
However, due to the versatility of the mass-spring system, this generation may require complicated fine-tuning of several simulation parameters to achieve an accurate soma reconstruction. The visualization of complex neuronal scenes requires special techniques for managing the intricate scene geometry. Multiresolution approaches (Clark; Luebke et al.) store several representations of the same object at different levels of detail. This approach has been followed in methods for CPU-based neural membrane mesh generation with different levels of detail (Brito et al.).
In neuronal scenes, it is not possible to store all the representations in graphic card memory, due to (i) the vast number of neurons and their complex morphology and (ii) the constant data transfers from main memory that would be required. Alternative multiresolution techniques (De Floriani et al.) encode many levels of detail in a single data structure. This data structure can be queried at runtime to extract a simplified mesh fulfilling some defined restrictions.
Another approach is Progressive Meshes (Hoppe et al.), which stores a coarse base mesh together with a sequence of refinement operations. Both of these approaches impose substantial CPU loads to traverse the triangulation and, at the same time, large memory requirements to store the highly detailed meshes. These issues hamper their applicability to complex neuronal scenes.
The problem with this approach is that the refinement process generates a huge number of primitives that need to be sent to the graphic card, producing large data transfers and bus bottlenecks. To avoid this, some of these algorithms have been deployed directly on GPUs, such as the hardware evaluation of the Catmull-Clark schemes proposed in Shiue et al. Also, a procedural displacement is performed over the newly generated vertices, making use of local information in each of the patches being processed, using triangles (Boubekeur and Schlick) or quads (Guthe et al.).
This last approach provides better performance than Curved Point Normal Triangles (Boschiroli et al.).
As a consequence of these problems, new stages have been included in the classical GPU pipeline, making the GPU more programmable and avoiding the need to store each newly generated vertex in graphic card memory. However, this stage usually slows down the pipeline significantly if it needs to manage a large number of geometry primitives (Schnetter et al.).
To avoid this problem, newer generations of GPUs have added further stages to the classical pipeline to facilitate tessellation tasks. The pipeline is thus expanded, giving users precise and easier control of the geometry and making it possible to manage the level of detail of the desired models directly on the GPU. The techniques mentioned so far are oriented toward objects of generic shapes. For neuroscience data, the objects to be modeled have a number of specific features that should be considered early in the technical design process, because they are key for optimizing method performance when dealing with scenes composed of large numbers of neurons.
Some examples of how this can be done are presented in the following sections. The main goal of the techniques presented here is the design of efficient neuron representations. To accomplish this goal, this paper proposes a set of techniques that can be grouped into two modules (Figure 1): the first module takes as input any existing morphological tracing, possibly from real neurons, and generates a coarse, low-poly 3D mesh, together with some additional information that allows the mesh to be used by any application capable of representing 3D meshes.
The second module takes the coarse generated mesh with the additional information and performs a view-dependent or other criterion-dependent refinement to render it at dynamically adaptive levels of detail LOD.
The following sections describe these modules in detail. Figure 1. Method overview: generation and refinement modules. From the morphological tracing, a coarse mesh is generated with additional information. A dynamically adaptive LOD refinement is then applied to the coarse mesh for its real-time visualization. The goal of this module is to generate an initial low-poly mesh that approximates the whole neuron. As previously mentioned, this method is based on existing morphological tracings such as those stored in NeuroMorpho.
These tracings usually describe the soma and neurites in different ways: neurites are described by polylines that trace their trajectories, while somata are often barely described, by a 2D contour at most. Therefore, the strategies applied for the mesh generation of these two structures are different, producing separate meshes that are merged in a final step.
Regarding somata, the solution presented here for their generation is an improvement on the approach proposed in Neuronize (Brito et al.). The underlying idea is to select an initial simple shape (in our case, a sphere) and simulate the physical deformation this sphere would undergo in the hypothetical case that the neurites attracted the sphere surface toward them, producing an elastic deformation of the sphere.
Since the Finite Element Method (FEM) works on volumetric models, the first step to obtain each soma is to create a volumetric representation of an icosphere. Hence, based on the soma center and radius provided by the morphological tracings, a tetrahedral mesh is built (Figure 2, top). This volumetric mesh is taken as the initial equilibrium state for an elastic deformation process. The external faces of the tetrahedra form a triangular mesh that represents the surface of the icosphere. However, since quads are more suitable than triangles for the subsequent steps of our method, pairs of adjacent triangles are merged into quads.
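The pairing of adjacent surface triangles into quads can be sketched as follows. This is a minimal, greedy illustration; the function name and the pairing strategy are illustrative, not the paper's exact procedure:

```python
# Sketch: pair adjacent surface triangles into quads via their shared edge.
# Greedy strategy for illustration only; not the paper's implementation.

def triangles_to_quads(triangles):
    """Greedily merge triangles sharing an edge into quads.

    Each triangle is a tuple of 3 vertex indices; each quad is returned
    as 4 indices ordered so that the shared edge becomes the diagonal.
    """
    edge_map = {}
    for ti, tri in enumerate(triangles):
        for k in range(3):
            edge = frozenset((tri[k], tri[(k + 1) % 3]))
            edge_map.setdefault(edge, []).append(ti)

    used, quads = set(), []
    for edge, tris in edge_map.items():
        if len(tris) != 2:
            continue                      # boundary edge, nothing to merge
        a, b = tris
        if a in used or b in used:
            continue                      # triangle already consumed
        used.update((a, b))
        shared = tuple(edge)
        apex_a = next(v for v in triangles[a] if v not in edge)
        apex_b = next(v for v in triangles[b] if v not in edge)
        # Order: apex of one triangle, shared vertex, apex of the other,
        # remaining shared vertex, so the shared edge is interior.
        quads.append((apex_a, shared[0], apex_b, shared[1]))
    return quads
```

On a closed icosphere surface every edge is shared by exactly two triangles, so a greedy pass of this kind leaves at most a few unpaired triangles, which a real implementation would handle separately.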
Afterward, the surface quads closest to each neurite are selected, and their vertices are pulled toward the neurite insertion point in the soma. Finally, applying a static linear FEM, the mesh is deformed until the final shape of the neuron soma is obtained (Figure 2, bottom). Figure 2. Top images: initial tetrahedral icosphere used in the soma generation process, showing its surface (left image) and its internal structure (right image). Bottom images: FEM deformation process. Figure 3. FEM deformation. Regarding neurites, the soma quads positioned at the beginning of each neurite define the initial section of their respective neurite.
Next, each initial section is extruded between each pair of neurite tracing points, following the neurite trajectory, to approximate the tubular structure of the neurite membrane. In addition, since the changes in neurite directions occur at the traced morphological points, an orientation vector is computed at each one of these points to re-orient the quad section associated with each morphological point.
Three different cases can be distinguished for the computation of the orientation vectors: one associated with standard tracing points (points that have only one child), one with bifurcation or fork joint tracing points (points that have two children), and one with ending tracing points (points that do not have any children).
The orientation vector, o, of a standard tracing point is the result of adding the vectors r0 and r1 and normalizing the resulting vector (Figure 4, left), where r0 is the unit vector indicating the direction from the parent tracing point to the current tracing point and r1 indicates the direction from the current tracing point to the child tracing point. Figure 4. Calculating the orientation vector according to the different types of tracing points. The current point is represented with a green sphere. Left image: standard tracing point. Middle image: bifurcation tracing point.
Right image: ending tracing point. Computing the orientation vector, o, at a bifurcation tracing point is performed in a similar way, but in this case the unit vectors r1 and r2 give the directions defined by the current tracing point and each of its children; consequently, o does not depend on the parent segment orientation (Figure 4, middle). Finally, at ending points, the orientation vector, o, is equal to the unit vector r0 (Figure 4, right). Once the orientation vectors have been computed, a section-quad is positioned at each tracing point, oriented according to its orientation vector computed as described above.
The section-quad is also scaled according to the radius of the tracing point. In the case of a bifurcation, an extra vertex is introduced to facilitate the stitching of the two branches, placing this new vertex at a distance equal to the radius of the bifurcation tracing point in the direction of its orientation vector, o (Figure 5, left).
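The orientation-vector rules and the placement of the scaled section-quads can be sketched in Python as below. The frame construction inside `section_quad` is one possible choice and the function names are illustrative, since the tracings do not fix the quad's roll around the neurite axis:

```python
import numpy as np

def orientation_vector(parent, current, child1, child2=None, is_end=False):
    """Orientation vector o at a tracing point (see Figure 4).

    - standard point: normalize(r0 + r1), with r0 = dir(parent -> current)
      and r1 = dir(current -> child)
    - bifurcation:    normalize(r1 + r2), directions to the two children
    - ending point:   r0 alone
    """
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    if is_end:
        return unit(np.subtract(current, parent))
    if child2 is not None:                       # bifurcation point
        r1 = unit(np.subtract(child1, current))
        r2 = unit(np.subtract(child2, current))
        return unit(r1 + r2)
    r0 = unit(np.subtract(current, parent))      # standard point
    r1 = unit(np.subtract(child1, current))
    return unit(r0 + r1)

def section_quad(center, o, radius):
    """Four vertices of a quad of the given radius, centered at a tracing
    point and lying in the plane perpendicular to its orientation vector o."""
    o = np.asarray(o, dtype=float)
    # Any vector not parallel to o serves to build a local frame.
    helper = np.array([1.0, 0.0, 0.0]) if abs(o[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(o, helper)
    u /= np.linalg.norm(u)
    w = np.cross(o, u)
    return [np.asarray(center, dtype=float) + radius * d for d in (u, w, -u, -w)]
```

For a straight tracing segment the orientation vector simply reproduces the segment direction, and the quad vertices all lie at the tracing-point radius, perpendicular to the path.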
Figure 5. Once the section-quads for the whole neurite have been generated, all the vertices are connected to each other to obtain the neurite quad mesh. The quads connecting these vertices are called lateral quads, to distinguish them from the previously mentioned section-quads.
There are two special cases that must be dealt with by this process. First, the connection at bifurcation tracing points: a plane containing the two children tracing points and the bifurcation point is created to guide the stitching of the two child branches. Second, the connection at the ending tracing points, where the four vertices of the section-quad do not need to be connected to any other vertex, is carried out by connecting these vertices to each other through another lateral quad.
Once the coarse neurites and the soma have been generated, their union is straightforward, because the first section-quad of each neurite, used for its extrusion, was itself a soma quad indicating the neurite starting point. In addition to this base mesh, some additional information is kept to guide the subsequent refinement step. Specifically, each vertex of the coarse mesh keeps track of its associated tracing point. As a result, at each vertex, the position, radius, and orientation vector of its associated tracing point can be accessed in the following stage.
The goal of this module is the generation of higher resolution meshes that yield better approximations of the neuron membrane, building upon the initial coarse mesh obtained as described above. Regarding somata, their resolution is defined by the resolution of the initial icosphere that is subsequently deformed. The neurites, in turn, undergo on-the-fly refinement procedures that take advantage of the hardware tessellation capabilities (tessellation shaders) supported by OpenGL from version 4.0 onwards. The tessellation process takes each input patch and subdivides it, computing new vertices together with their associated attributes (Shreiner et al.).
This tessellation stage is further decomposed into three substages. The first substage, the Tessellation Control Shader, determines the number of subdivisions (i.e., the subdivision levels) to be applied to each patch. The second substage, the Tessellation Primitive Generator, takes as inputs the patch and the subdivision levels defined in the previous substage and subdivides the original patch accordingly. Finally, the third substage, the Tessellation Evaluation Shader, computes the attributes of each new vertex generated by the previous substage, such as vertex positions.
It should be noted that, since only the first and the third substages are user programmable, the present method only needs to compute the subdivision levels and the attributes of the newly generated vertices. Homogeneous refinement can be achieved by setting the same subdivision levels for all the object patches. However, given the overwhelming geometric complexity of typical neurons, the use of adaptive levels of detail is recommended, allowing the neurites closer to the camera to be refined while the detail of distant areas is kept lower. This distance to the camera can be encoded as a generic importance value associated with each tracing point, and this value could also be used to encode criteria other than distance.
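As an illustration, a camera-distance criterion could be mapped to importance values as follows. All thresholds and the linear falloff are hypothetical choices, not values taken from the paper:

```python
def importance_from_distance(distance, near=10.0, far=500.0,
                             max_level=32.0, min_level=1.0):
    """Map camera distance to a per-tracing-point importance value.

    Points nearer than `near` get the maximum subdivision level, points
    beyond `far` get the minimum, and the level falls off linearly in
    between. All thresholds here are illustrative.
    """
    if distance <= near:
        return max_level
    if distance >= far:
        return min_level
    t = (distance - near) / (far - near)
    return max_level + t * (min_level - max_level)
```

Any other criterion (e.g., functional relevance of a branch) could replace the distance in the same scheme, since the refinement stage only consumes the resulting importance values.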
Since each vertex keeps track of its associated tracing point, assigning importance values to the tracing points is analogous to labeling each vertex with an importance value. As mentioned above, the first refinement step involves determining the subdivision levels for each patch (it should be noted that each lateral quad of the coarse mesh is taken as a single patch).
To define the subdivision pattern, two different levels must be taken into consideration: an outer subdivision level and an inner subdivision level. The outer subdivision level determines the number of subdivisions along each edge, requiring therefore four parameters in the case of a quad patch (one for each edge). The inner subdivision level determines the number of subdivisions in each edge direction (longitudinal and transversal), requiring therefore two more parameters (Shreiner et al.).
Since these levels are set according to the importance of the vertices, discontinuities can occur whenever the importance values of adjacent vertices are very different. For this reason, the outer subdivision level of each edge is computed as a weighted sum of the importance of its two vertices.
In addition, both inner subdivision levels have the same value, obtained also as a weighted sum of the importance of the four vertices of the quad. This way of determining the subdivision levels avoids discontinuities on the refined mesh; Figure 6 illustrates a mesh refinement operation that does not prevent discontinuities, which clearly contrasts with the results obtained with the proposed solution, where no discontinuities are created. Note that this method prevents the appearance of discontinuities not only along the neurites but also at the refined neurite-soma connections.
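The crack-free level computation can be sketched as follows. A plain average stands in for the weighted sums, whose exact weights the text does not specify:

```python
def patch_tess_levels(importance):
    """Subdivision levels for a quad patch from its 4 vertex importances.

    importance: [i0, i1, i2, i3] with vertices in edge order, so edge k
    joins vertex k and vertex (k + 1) % 4. Each outer level depends only
    on the two vertices of its own edge; two adjacent patches share those
    vertices and therefore compute identical levels on the shared edge,
    which is what prevents cracks. A plain average is used here as the
    weighted sum.
    """
    outer = [(importance[k] + importance[(k + 1) % 4]) / 2.0 for k in range(4)]
    inner_level = sum(importance) / 4.0
    inner = [inner_level, inner_level]   # same value in both directions
    return outer, inner
```

Because the outer level of an edge is a function of that edge's vertices only, any two patches meeting at an edge agree on its subdivision count regardless of their interior levels.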
Finally, once the subdivision levels have been defined, the second substage (the Tessellation Primitive Generator) can divide each original patch accordingly. Figure 6. Discontinuities on a refined mesh. The left image shows a refined mesh with discontinuities caused by the difference in contiguous subdivision levels.
The right image shows the same mesh refined with our method, in which there are no discontinuities. Finally, the third substage, the Tessellation Evaluation Shader, must compute the position of each new vertex generated by the previous substage. These new vertices are initially positioned on the plane of the quad to which they belong, and their final positions are calculated from the homogeneous tessellation coordinates generated in the previous stage: x, the transversal coordinate, and y, the longitudinal coordinate.
In our specific case, each new vertex of the patch needs to be displaced to approximate a cylinder, which is the best approximation of the neurite cross section that can be obtained with the available data. In the case of vertices that lie within a section-quad (centered at a tracing point), this operation can easily be performed by displacing each vertex. The displacement magnitude for each vertex should be equal to the radius associated with the tracing point, with the displacement performed in a radial direction from the tracing point.
However, the new vertices that do not lie in a section-quad require the computation of a point along the neurite trajectory that behaves as a center point from which the radial directions originate. This process is outlined in the following paragraphs. Figure 7 presents a portion of a coarse mesh, where a set of four lateral quads represents the union between two morphological tracing points (only one lateral quad, in purple, is depicted).
The first two vertices of each lateral quad, v0 and v1, correspond to the first tracing point, t0, of a tracing segment, and the last two vertices, v2 and v3, correspond to the second tracing point, t1, of that segment. Because of this, the position of the center, the radius, and the orientation vector associated with the first two vertices of the lateral quad are those of t0, while the values of t1 are associated with the last two vertices.
Figure 7. Correspondence between the four lateral quad vertices and their two corresponding morphological tracing points. For any new vertex, the position of its associated center, as well as the direction and magnitude of the displacement to be applied to that vertex, are calculated based on (i) the information of the four vertices of the lateral quad to be tessellated and (ii) the parameters of the two tracing points associated with these four vertices. The position of the center associated with any new vertex could simply be computed along the straight segments that define the neurite trajectory; however, the neuritic paths can be smoothed by interpolating the tracing points with a cubic Hermite spline function.
In this case, the position of the center point, c, is computed with the standard cubic Hermite expression c(s) = (2s^3 - 3s^2 + 1) p0 + (s^3 - 2s^2 + s) m0 + (-2s^3 + 3s^2) p1 + (s^3 - s^2) m1, where p0 and p1 are the positions of the two tracing points of the segment, m0 and m1 are their orientation vectors acting as tangents, and s is the longitudinal interpolation parameter. Figure 8 shows the original path of a neurite and the path smoothed using a cubic Hermite spline. Figure 8. Top image: original neurite path. Bottom image: smoothed path using cubic Hermite spline functions. This basic formulation of cubic Hermite splines can produce undesired loops when abrupt changes occur in the orientation vectors of two adjacent tracing points.
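The smoothed center-point evaluation can be sketched as below. A simple length-proportional tangent magnitude is used as one way to realize the adaptive magnitude discussed next; the exact scale factor is an assumption:

```python
import numpy as np

def hermite_center(p0, p1, o0, o1, s):
    """Center point c(s) on the smoothed path between tracing points p0, p1.

    o0 and o1 are the unit orientation vectors at the two points. Their
    magnitude is adapted to the segment length to avoid loops on short
    segments; the proportionality used here is illustrative.
    """
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    length = np.linalg.norm(p1 - p0)
    m0 = np.asarray(o0, dtype=float) * length    # adaptive tangent magnitude
    m1 = np.asarray(o1, dtype=float) * length
    h00 = 2 * s**3 - 3 * s**2 + 1                # cubic Hermite basis
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

The spline interpolates both endpoints exactly, so the original morphological points are preserved, and for collinear orientation vectors it degenerates to the straight segment.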
To avoid these artifacts, the magnitude of the orientation vector can be adapted, taking into account the distance between the two tracing points of the segment. Figure 9 shows the effects of this improvement. Figure 9. Left image: resulting path when a fixed magnitude for the orientation vectors is maintained. Right image: resulting path when an adaptive magnitude is applied. Once the center of the new vertex, c, is calculated, the direction of the displacement, n, is obtained by performing a bilinear interpolation of the normals of the four vertices of the lateral quad, where these normals represent the radial directions from their associated tracing points.
The magnitude of the displacement, r, is also computed by interpolating the radii of the first and second tracing points, r0 and r1. Hence, the position of the new vertex, v, is calculated as v = c + r n, as can be seen in Figure 10. Figure 10. Based on the information associated with the four vertices of the lateral quad and the two corresponding morphological tracing points, the center, c, the normal, n, and the displacement magnitude, r, are calculated to obtain the position of the new vertex, v.
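The final displacement step can be sketched in Python as below. The vertex ordering and the exact interpolation weights are assumptions, since the text gives only the general expression v = c + r n:

```python
import numpy as np

def refined_vertex(c, normals, radii, x, y):
    """Final position of a new vertex: v = c + r * n (see Figure 10).

    c       : center on the (smoothed) neurite path at coordinate y
    normals : the 4 lateral-quad vertex normals (radial directions),
              assumed ordered (v0, v1) at tracing point t0 and (v2, v3)
              at tracing point t1, going around the quad
    radii   : (r0, r1), radii of the two tracing points
    x, y    : transversal and longitudinal tessellation coordinates in [0, 1]
    """
    normals = [np.asarray(n, dtype=float) for n in normals]
    n0 = (1 - x) * normals[0] + x * normals[1]   # edge at t0
    n1 = (1 - x) * normals[3] + x * normals[2]   # edge at t1
    n = (1 - y) * n0 + y * n1                    # bilinear normal interpolation
    n /= np.linalg.norm(n)
    r = (1 - y) * radii[0] + y * radii[1]        # interpolated magnitude
    return np.asarray(c, dtype=float) + r * n
```

Interpolating both the radial direction and the radius along the segment is what turns the flat lateral quad into a smoothly varying cylindrical shell between the two tracing points.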
This paper presents a technique for generating 3D mesh neuron models based on standard, widely used morphological tracings, such as those available in public repositories. The method approximates the cell bodies and the dendritic and axonal arbors in independent procedures that are later merged, resulting in closed surfaces that approximate whole neurons. As described in the previous section, a coarse mesh is the starting point for the method, which dynamically applies subsequent refinement processes to adaptively smooth and improve the quality of the 3D approximation of the cell membrane.
This initial coarse mesh presents some desirable properties that make it suitable for visualization and simulation purposes, such as being closed and 2D-manifold. It should be noted that the techniques applied during the mesh generation process guarantee that the traced dendritic and axonal trajectories are preserved, also providing a plausible reconstruction of the soma, specifically built for each cell.
This soma reconstruction process is able to recover information that was not recorded when the neuron was traced, which is often the case in existing data repositories. The following subsections present an evaluation of the quality of the generated meshes and a performance analysis in terms of memory and rendering time. In this paper, the original 3D shape of somata is approximated through the deformation of initial spheres, taking into account the anatomy of the dendrites and axon.
An initial version of the method was proposed in Neuronize (Brito et al.). Concerning the accuracy of the soma reconstructions and their estimated volume (which is of interest for electrophysiological simulations), no volume data acquired from digitized neurons were available to serve for a quantitative assessment of the accuracy of the method.
In addition, measuring the volume of the generated somata can provide some quantitative assessment. For this purpose, soma volumes have been measured and compared with the volumes of the real somata, extracted with Imaris. These real volumes have been obtained specifically for testing purposes, since they are not usually measured, and have been taken as the ground truth, even though Imaris also introduces volume estimation errors. Figure 11 presents these results visually, while Tables 1 and 2 present them numerically. Table 1 presents the soma volumes obtained with the different methods, and Table 2 presents the Hausdorff distance (mean, maximum, and minimum) as a metric to quantify the distance between the real somata and the generated somata (Rockafellar and Wets). As can be seen, the somata generated with the FEM-based method are closer to the somata obtained with Imaris than the somata generated with Neuronize, according to the Hausdorff distance metric.
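For reference, a symmetric Hausdorff distance over mesh vertex sets can be computed with a brute-force sketch like the following. The actual evaluation likely used dedicated tooling; this is only illustrative, and comparing vertex sets is only an approximation of a true surface distance:

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 3D point sets (brute force).

    For each point, the distance to the closest point of the other set is
    taken; the Hausdorff distance is the largest such value in either
    direction: max(sup_a inf_b d(a, b), sup_b inf_a d(a, b)).
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Pairwise distance matrix, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    directed_ab = d.min(axis=1).max()   # sup over a of the nearest b
    directed_ba = d.min(axis=0).max()   # sup over b of the nearest a
    return max(directed_ab, directed_ba)
```

The brute-force distance matrix is quadratic in the number of vertices, which is acceptable for individual soma meshes but would need spatial acceleration structures for whole neurons.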
In addition, the FEM deformation process returns smoother soma surfaces, avoiding the noisy artifacts that appear when the surface is generated with isosurfaces after thresholding in Imaris, or with Neuronize. Regarding soma volume, the values obtained with the FEM method closely approximate the results obtained with Imaris and Neuronize, and the FEM method is much easier to parameterize than Neuronize. However, given the lack of accurate ground-truth data, nothing more specific can be stated than that the results obtained with the proposed method appear to be largely compatible with those provided by the other methods considered.
Comparison between the real somata (middle images), the meshes generated with Neuronize (right images), and the meshes generated with our proposal (left images), for three different somata (A, B, and C).
Table 2. Hausdorff distance (mean, maximum, and minimum) between the real somata, obtained with Imaris, and the generated somata, using Neuronize and the FEM-based method. The neurite reconstruction process presented in this paper guarantees that the reconstructed neurites preserve the original morphological point positions and diameters, as extracted from the original tracings. This is the case not only for the coarse mesh reconstruction but also for the refined meshes generated on the fly, using a procedure that creates very high resolution meshes with low memory penalties.
The neurite refinement process has been specifically designed for constructing cylindrical shapes from the initial low resolution mesh, since the data available in morphological tracings do not facilitate other approximations for neuron processes beyond those based on generalized cylinders. The reconstructed cylindrical shapes are always crack-free, due to the intrinsic characteristics of the proposed hardware tessellation process, even when the mesh includes sections with different degrees of resolution.
In addition, to increase the visual quality of the generated meshes, the trajectories of the morphological tracings can be smoothed using a spline-based technique. In this way, the neurite paths become more even, avoiding abrupt trajectory changes that are not found in biological samples but are created during the morphology acquisition process, as can be seen in Figure 12. It should be noted that, even after smoothing, the original morphological points of the neuronal tracings are always maintained.
Figure 12. Trajectory smoothing. Top image: refinement method applied to dendrites without any trajectory smoothing.
Bottom image: refinement applied to the same dendrites using the Hermite spline-based method proposed in this paper. After generating the different neuron component meshes, they need to be connected to assemble the whole modeled neuron. The connection strategies used here were designed for providing neurites with smooth and continuous meshes, taking special care with the connections at neurite bifurcations and at the soma.
The method presented in this paper generates smooth unions of mesh components regardless of the resolution of the final mesh, increasing the overall quality of the resulting mesh. Figure 13 (top) shows a junction at a neurite bifurcation in detail, while Figure 13 (bottom) shows a soma-neurite junction. Figure 13. Top images: close view of a generated neurite fork junction rendered in shading mode (left image) and wireframe mode (right image).
Bottom images: close view of a soma-neurite junction for neuron meshes with homogeneous level of detail (left image) and adaptive level of detail based on the camera distance (right image).