<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Hardware | HPSC Smart Lab</title>
<link>/hardware/</link>
<atom:link href="/hardware/index.xml" rel="self" type="application/rss+xml" />
<description>Hardware</description>
<generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language>
<image>
<url>/img/logohpsclab.png</url>
<title>Hardware</title>
<link>/hardware/</link>
</image>
<item>
<title>Cluster HPC Bluejeans</title>
<link>/hardware/bluejeans/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/hardware/bluejeans/</guid>
<description><p>Bluejeans Hardware Features<br />
Beowulf is a multi-computer architecture which can be used for parallel and distributed computations.<br />
Bluejeans (Bj) is a Beowulf HPC cluster based at DSA-LabMNCP, composed of 36 working nodes and 4 service nodes connected via a dedicated 1000 Mb/s Ethernet switch.<br />
Working node features:<br />
1 × Intel Dual Core CPU, 2.6 GHz;<br />
1 × 1 GB RAM;<br />
1 × Ethernet connection, 1000 Mb/s.<br />
Service node features:<br />
1 × Intel Dual Core CPU, 2.6 GHz;<br />
2 × 1 GB RAM;<br />
2 × Ethernet connections, 1000 Mb/s.</p>
<table>
<thead>
<tr>
<th>Bj: front view</th>
<th>Bj: side view</th>
</tr>
</thead>
<tbody>
<tr>
<td><img src="/img/bjfront.jpg" alt="bjfront" /></td>
<td><img src="/img/hpc.jpg" alt="bjside" /></td>
</tr>
</tbody>
</table>
<p>The service nodes have been configured to export the following services:<br />
user authentication;<br />
NFS server;<br />
data storage and data backup;<br />
SSH login server.<br />
The hardware schema of the DSA-LabMNCP/sHPC-Bluejeans is shown below:</p>
<p><div style="text-align: center;"><img src="/img/Bj-Arch.jpg" alt="bjArch" /></div>
<div style="text-align: left;">
In total, Bj comprises 40 machines with 80 cores available for parallel and distributed computation.<br />
The data storage server exports to the service nodes 8 mirrored 2 TB hard disks dedicated to simulation output storage and data backup.
</div></p>
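The aggregate figures above follow directly from the node counts and the per-node specs; a minimal sanity check (assuming each dual-core CPU contributes two cores):

```python
# Sanity check of the Bluejeans (Bj) aggregate figures quoted above.
working_nodes = 36
service_nodes = 4
cores_per_node = 2  # one Intel Dual Core CPU per node

total_nodes = working_nodes + service_nodes
total_cores = total_nodes * cores_per_node

print(total_nodes)  # 40 machines
print(total_cores)  # 80 cores
```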
<p><div style="text-align: center;"><img src="/img/bj-2.jpg" alt="bj" /></div>
<div style="text-align: left;">
The goal of the DSA Bluejeans cluster is to provide computational resources and a distributed environment for DSA research activities; the DSA-LabMNCP team can run their batch jobs and distributed computations under the Torque (PBS) resource manager. The Torque scheduler provides the following features:
</div></p>
<ul>
<li>Fault tolerance: additional failure conditions are checked and handled, with node health check script support;</li>
<li>Scheduling interface;</li>
<li>Scalability;</li>
<li>Usability.</li>
</ul>
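<p>As an illustration of how a batch job is submitted under Torque/PBS, here is a minimal, hypothetical submission script; the queue name, resource limits, and executable name are assumptions, not the cluster's actual configuration:</p>
<pre>
#!/bin/bash
# Hypothetical Torque/PBS batch script for the Bj cluster.
#PBS -N my_simulation        # job name
#PBS -l nodes=4:ppn=2        # 4 nodes, 2 cores each (assumed layout)
#PBS -l walltime=01:00:00    # one-hour wall-clock limit
#PBS -q batch                # queue name is an assumption

cd "$PBS_O_WORKDIR"          # Torque starts jobs in $HOME by default
mpirun -np 8 ./my_simulation # run the parallel executable
</pre>
<p>Such a script is submitted with <code>qsub</code> and monitored with <code>qstat</code>.</p>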
<p>You can also see Bj's current runtime performance at this link.</p>
</description>
</item>
<item>
<title>Genesis GE-i940 Tesla</title>
<link>/hardware/genesis/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/hardware/genesis/</guid>
<description><p>On September 28, 2009, a Genesis GE-i940 Tesla workstation, based on both GPGPU* and nVidia/CUDA** technologies, was installed at DSA/LabMNCP.</p>
<p>It is a testbed for developing advanced simulations in the following research fields:</p>
<ul>
<li><span style="color: blue;">Stochastic simulation;</span></li>
<li><span style="color: blue;">Molecular dynamics;</span></li>
<li><span style="color: blue;">Atmospheric and climate modeling;</span></li>
<li><span style="color: blue;">Weather forecast investigation;</span></li>
<li><span style="color: blue;">Grid/Cloud hybrid virtualization.</span></li>
</ul>
<p><span style="color: blue;">*</span><br />
“GPGPU stands for General-Purpose computation on Graphics Processing Units, also known as GPU Computing. Graphics Processing Units (GPUs) are high-performance many-core processors capable of very high computation and data throughput. See more <a href="https://www.ibiblio.org/" target="_blank">here</a>.”</p>
<p><span style="color: blue;">**</span><br />
“NVIDIA® CUDA™ is a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. See more <a href="https://developer.nvidia.com/about-cuda" target="_blank">here</a>.”</p>
<table>
<thead>
<tr>
<th><span style="font-size: 18px; color: blue;">Hardware</span></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td><img src="/img/ge.jpg" alt="ge-image" /></td>
</tr>
<tr>
<td>Mainboard</td>
<td>Asus x58/ICH10R, 3 PCI-Express x16, 6 SATA, 2 SAS, 3+6 USB</td>
</tr>
<tr>
<td>CPU</td>
<td>i7-940, 2.93 GHz (133 MHz FSB), Quad Core, 8 MB cache</td>
</tr>
<tr>
<td>RAM</td>
<td>6 × 2 GB DDR3 1333 DIMM</td>
</tr>
<tr>
<td>Hard Disk</td>
<td>2 × 500 GB SATA, 16 MB cache, 7,200 RPM</td>
</tr>
<tr>
<td>GPU</td>
<td>1 × Quadro FX 5800, 4 GB RAM</td>
</tr>
<tr>
<td></td>
<td>2 × Tesla C1060, 4 GB RAM</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th><span style="font-size: 18px; color: blue;">Software</span></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>OS:</td>
<td><a href="https://www.centos.org/" target="_blank">GNU/Linux CentOS 5.3 64-bit</a></td>
</tr>
<tr>
<td>Driver:</td>
<td><a href="https://www.nvidia.com/object/thankyou_linux.html?url=/compute/cuda/2_1/drivers/NVIDIA-Linux-x86_64-180.22-pkg2.run" target="_blank">nVidia Cuda 180.22 Linux 64bit</a></td>
</tr>
<tr>
<td>VMware:</td>
<td><a href="https://www.vmware.com/" target="_blank">VMware-server-2.0.2</a></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th><span style="font-size: 18px; color: blue;">OUTPUT of First Test:</span></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Serial (ms)</td>
<td>GPU (ms)</td>
</tr>
<tr>
<td>execution time for malloc</td>
<td>0.02</td>
<td>175.21</td>
</tr>
<tr>
<td>execution time for RndGnr</td>
<td>51430.92</td>
<td>2283.19</td>
</tr>
<tr>
<td>execution time for init</td>
<td>275.48</td>
<td>0.31</td>
</tr>
<tr>
<td>execution time for computing</td>
<td>391391.12</td>
<td>329.19</td>
</tr>
<tr>
<td>execution time for I/O</td>
<td>56822.77</td>
<td>64740.54</td>
</tr>
<tr>
<td>execution time for GPU/CPU</td>
<td></td>
<td>198.43</td>
</tr>
</tbody>
</table>
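The serial-vs-GPU timings in the table imply the following per-phase speedups, computed directly from the figures above; note that the GPU wins everywhere except I/O:

```python
# Speedup = serial time / GPU time, taken from the first-test table (ms).
timings = {
    "RndGnr":    (51430.92, 2283.19),
    "init":      (275.48, 0.31),
    "computing": (391391.12, 329.19),
    "I/O":       (56822.77, 64740.54),
}
speedups = {step: serial / gpu for step, (serial, gpu) in timings.items()}
for step, s in speedups.items():
    print(f"{step}: {s:.1f}x")
# computing is ~1189x faster on the GPU; I/O is ~0.88x, i.e. slower.
```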
<p><span style="font-size: 18px; color: blue;">Output using GPU:</span></p>
<pre>
device 0 : Quadro FX 5800
device 1 : Tesla C1060
device 2 : Tesla C1060

Selected device: 2

device 2 : Tesla C1060
major/minor : 1.3 compute capability
Total global mem : -262144 bytes
Shared block mem : 16384 bytes
RegsPerBlock : 16384
WarpSize : 32
MaxThreadsPerBlock : 512
TotalConstMem : 65536 bytes
ClockRate : 1296000 (kHz)
deviceOverlap : 1
MultiProcessorCount: 30
----------------------------------------
Using 1048576 particles
100 time steps
----------------------------------------
</pre>
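The "Total global mem : -262144 bytes" line in the output above is an overflow artifact: the memory size was printed through a signed 32-bit integer. Reinterpreting the value as unsigned recovers the Tesla C1060's actual 4 GB:

```python
# Recover the true memory size from the overflowed 32-bit value.
reported = -262144             # signed 32-bit value printed by the test
actual = reported % 2**32      # reinterpret as unsigned 32-bit
print(actual)                  # 4294705152 bytes
print(actual / 2**30)          # ~4.0 GiB, matching the C1060's 4 GB
```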
</description>
</item>
<item>
<title>GreenJeans</title>
<link>/hardware/greenjeans/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/hardware/greenjeans/</guid>
<description><p>GreenJeans is the new experimental HPC Beowulf cluster of DSA, built with the aim of creating an economically and environmentally sustainable solution for scientific HPC.</p>
<p><div style="text-align: center;"><img src="/img/logogreen.png" alt="logogreen" /></div></p>
<p><span style="color: blue;">GreenJeans Making of&hellip;</span><br />
<img src="/img/gj.gif" alt="gj" /></p>
<p>On GreenJeans, the following software is installed:</p>
<ul>
<li><a href="https://developer.nvidia.com/cuda-downloads" target="_blank">CUDA</a>(Driver / Toolkit / SDK)</li>
<li><a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html" target="_blank">SDK Java Sun</a></li>
<li>MPICH4 V1</li>
<li>MPICH4 V2</li>
<li>MPI2-VMI</li>
<li><a href="https://developer.nvidia.com/cuda-downloads" target="_blank">Eucalyptus</a>(KVM/QEMU Hypervisor)</li>
<li><a href="http://greenjeans.uniparthenope.it/ganglia" target="_blank">Ganglia</a></li>
<li><a href="http://www.clusterresources.com/products/torque-resource-manager.php" target="_blank">Torque</a></li>
</ul>
<p>Every work node of GreenJeans has an nVidia GeForce GTX 560 Ti installed:<br />
<pre>
Device 0: “GeForce GTX 560 Ti”<br />
CUDA Driver Version: 4.0<br />
CUDA Runtime Version: 4.0<br />
CUDA Capability Major/Minor version number: 2.1<br />
Total amount of global memory: 1072889856 bytes<br />
Multiprocessors x Cores/MP = Cores: 8 (MP) x 48 (Cores/MP) = 384 (Cores)<br />
Total amount of constant memory: 65536 bytes<br />
Total amount of shared memory per block: 49152 bytes<br />
Total number of registers available per block: 32768<br />
Warp size: 32<br />
Maximum number of threads per block: 1024<br />
Maximum sizes of each dimension of a block: 1024 x 1024 x 64<br />
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535<br />
Maximum memory pitch: 2147483647 bytes<br />
Texture alignment: 512 bytes<br />
Clock rate: 1.64 GHz<br />
Concurrent copy and execution: Yes<br />
Run time limit on kernels: No<br />
Integrated: No<br />
Support host page-locked memory mapping: Yes<br />
Compute mode: Default (multiple host threads can use this device simultaneously)<br />
Concurrent kernel execution: Yes<br />
Device has ECC support enabled: No<br />
Device is using TCC driver mode: No<br />
</pre>
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce GTX 560 Ti</p>
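<p>The core count and memory size reported by deviceQuery above are easy to cross-check:</p>

```python
# Verify the GTX 560 Ti figures reported by deviceQuery above.
multiprocessors = 8
cores_per_mp = 48
global_mem_bytes = 1072889856

cores = multiprocessors * cores_per_mp
print(cores)                     # 384 CUDA cores
print(global_mem_bytes / 2**30)  # ~1.0 GiB of global memory
```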
</description>
</item>
</channel>
</rss>