<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Hardware | HPSC Smart Lab</title>
    <link>/hardware/</link>
      <atom:link href="/hardware/index.xml" rel="self" type="application/rss+xml" />
    <description>Hardware</description>
    <generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language>
    <image>
      <url>/img/logohpsclab.png</url>
      <title>Hardware</title>
      <link>/hardware/</link>
    </image>
    
    <item>
      <title>Bluejeans HPC Cluster</title>
      <link>/hardware/bluejeans/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/hardware/bluejeans/</guid>
      <description>&lt;p&gt;Bluejeans Hardware Features&lt;br /&gt;
Beowulf is a multi-computer architecture which can be used for parallel and distributed computations.&lt;br /&gt;
Bluejeans (Bj) is a Beowulf HPC cluster based at DSA-LabMNCP, composed of 36 working nodes and 4 service nodes connected together via a dedicated 1000 Mb/s Ethernet switch.&lt;br /&gt;
Working node features:&lt;br /&gt;
1 x Intel Dual Core CPU, 2.6 GHz;&lt;br /&gt;
1 x 1 GB RAM;&lt;br /&gt;
1 x 1000 Mb/s Ethernet connection.&lt;br /&gt;
Service node features:&lt;br /&gt;
1 x Intel Dual Core CPU, 2.6 GHz;&lt;br /&gt;
2 x 1 GB RAM;&lt;br /&gt;
2 x 1000 Mb/s Ethernet connections.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Bj: front view&lt;/th&gt;
&lt;th&gt;Bj: side view&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&#34;/img/bjfront.jpg&#34; alt=&#34;bjfront&#34; /&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src=&#34;/img/hpc.jpg&#34; alt=&#34;bjside&#34; /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The service nodes have been configured to export the following services:&lt;br /&gt;
User Authentication;&lt;br /&gt;
NFS Server;&lt;br /&gt;
Data Storage and Data Backup;&lt;br /&gt;
SSH login server.&lt;br /&gt;
Below you can see the hardware schema of the DSA-LabMNCP/sHPC-Bluejeans:&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center;&#34;&gt;&lt;img src=&#34;/img/Bj-Arch.jpg&#34; alt=&#34;bjArch&#34; /&gt;&lt;/div&gt;
In total, Bj is composed of 40 machines with 80 cores available for parallel and distributed computation.&lt;br /&gt;
The data storage server exports to the service nodes 8 hard disks of 2 TB each (mirrored), dedicated to simulation output storage and data backup.&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center;&#34;&gt;&lt;img src=&#34;/img/bj-2.jpg&#34; alt=&#34;bj&#34; /&gt;&lt;/div&gt;
The goal of the DSA Bluejeans cluster is to provide computational resources and a distributed environment for the DSA research activities; the DSA-LabMNCP team can run their batch jobs and distributed computations under the Torque (PBS) resource manager (a minimal example of such a job is sketched after the feature list below). The Torque scheduler provides the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fault tolerance: additional failure conditions checked/handled; node health check script support;&lt;/li&gt;
&lt;li&gt;Scheduling Interface;&lt;/li&gt;
&lt;li&gt;Scalability;&lt;/li&gt;
&lt;li&gt;Usability;&lt;/li&gt;
&lt;/ul&gt;
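
&lt;p&gt;As a minimal illustration (a sketch, not part of the original Bj documentation), a distributed computation on Bj is typically an MPI program that a user compiles with the cluster MPI wrapper (e.g. mpicc) and launches from a Torque batch job via mpirun:&lt;/p&gt;

&lt;pre&gt;
/* hello_mpi.c - minimal sketch of a distributed job for the Bj cluster.
   Assumes an MPI implementation and its mpicc wrapper are installed;
   names and output are illustrative. */
#include &amp;lt;mpi.h&gt;
#include &amp;lt;stdio.h&gt;

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&amp;amp;argc, &amp;amp;argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);    /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);    /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                                /* shut down the MPI runtime */
    return 0;
}
&lt;/pre&gt;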

&lt;p&gt;You can also see the current runtime performance of Bj at this link.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Genesis GE-i940 Tesla</title>
      <link>/hardware/genesis/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/hardware/genesis/</guid>
      <description>&lt;p&gt;On September 28, 2009, a Genesis GE-i940 Tesla workstation, based on both GPGPU* and NVIDIA/CUDA** technologies, was installed at DSA/LabMNCP.&lt;/p&gt;

&lt;p&gt;It is a testbed for developing advanced simulations in the following research fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Stochastic simulation;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Molecular Dynamics;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Atmospheric and climate modeling;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Weather forecast investigation;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Grid/Cloud Hybrid Virtualization.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;*&lt;/span&gt;&lt;br /&gt;
“GPGPU stands for General-Purpose computation on Graphics Processing Units, also known as GPU Computing. Graphics Processing Units (GPUs) are high-performance many-core processors capable of very high computation and data throughput. See more &lt;a href=&#34;https://www.ibiblio.org/&#34; target=&#34;_blank&#34;&gt;here&lt;/a&gt;.”&lt;/p&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;**&lt;/span&gt;&lt;br /&gt;
“NVIDIA® CUDA™ is a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. See more &lt;a href=&#34;https://developer.nvidia.com/about-cuda&#34; target=&#34;_blank&#34;&gt;here&lt;/a&gt;.”&lt;/p&gt;
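
&lt;p&gt;As an illustrative sketch of the GPGPU/CUDA model described above (a hedged example; names and sizes are arbitrary and do not come from the DSA code base), a CUDA program expresses its data-parallel work as a kernel executed by many GPU threads:&lt;/p&gt;

&lt;pre&gt;
// vecadd.cu - minimal CUDA sketch: one GPU thread per vector element.
// Illustrative only; compile with nvcc.
#include &amp;lt;cuda_runtime.h&gt;
#include &amp;lt;stdio.h&gt;
#include &amp;lt;stdlib.h&gt;

__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i &amp;lt; n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1048576;
    size_t bytes = n * sizeof(float);

    // host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i &amp;lt; n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // device buffers and host-to-device copies
    float *da, *db, *dc;
    cudaMalloc(&amp;amp;da, bytes);
    cudaMalloc(&amp;amp;db, bytes);
    cudaMalloc(&amp;amp;dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // launch the kernel on the GPU and copy the result back
    vec_add&amp;lt;&amp;lt;&amp;lt;(n + 255) / 256, 256&gt;&gt;&gt;(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
&lt;/pre&gt;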

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;Hardware&lt;/span&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src=&#34;/img/ge.jpg&#34; alt=&#34;ge-image&#34; /&gt;&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Mainboard&lt;/td&gt;
&lt;td&gt;Asus X58/ICH10R, 3 x PCI-Express x16, 6 x SATA, 2 x SAS, 3+6 USB&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;Intel Core i7-940, 2.93 GHz (133 MHz FSB), Quad Core, 8 MB cache&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;6 x 2 GB DDR3-1333 DIMM&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Hard Disk&lt;/td&gt;
&lt;td&gt;2 x 500 GB SATA, 16 MB cache, 7,200 RPM&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;GPU&lt;/td&gt;
&lt;td&gt;1 x Quadro FX 5800, 4 GB RAM&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;2 x Tesla C1060, 4 GB RAM each&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;Software&lt;/span&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OS:&lt;/td&gt;
&lt;td&gt;&lt;a href=&#34;https://www.centos.org/&#34; target=&#34;_blank&#34;&gt;GNU/Linux CentOS 5.3, 64-bit&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Driver:&lt;/td&gt;
&lt;td&gt;&lt;a href=&#34;https://www.nvidia.com/object/thankyou_linux.html?url=/compute/cuda/2_1/drivers/NVIDIA-Linux-x86_64-180.22-pkg2.run&#34; target=&#34;_blank&#34;&gt;NVIDIA CUDA driver 180.22, Linux 64-bit&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;VMware:&lt;/td&gt;
&lt;td&gt;&lt;a href=&#34;https://www.vmware.com/&#34; target=&#34;_blank&#34;&gt;VMware-server-2.0.2&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;OUTPUT of First Test&lt;/span&gt;&lt;/th&gt;
&lt;th&gt;Serial simulation (ms)&lt;/th&gt;
&lt;th&gt;GPU (ms)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;

&lt;tr&gt;
&lt;td&gt;execution time for malloc&lt;/td&gt;
&lt;td&gt;0.02&lt;/td&gt;
&lt;td&gt;175.21&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for RndGnr&lt;/td&gt;
&lt;td&gt;51430.92&lt;/td&gt;
&lt;td&gt;2283.19&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for init&lt;/td&gt;
&lt;td&gt;275.48&lt;/td&gt;
&lt;td&gt;0.31&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for computing&lt;/td&gt;
&lt;td&gt;391391.12&lt;/td&gt;
&lt;td&gt;329.19&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for I/O&lt;/td&gt;
&lt;td&gt;56822.77&lt;/td&gt;
&lt;td&gt;64740.54&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for GPU/CPU&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;198.43&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
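
&lt;p&gt;The per-phase timings above imply separate timers around each phase; a common way to measure a GPU phase in milliseconds is with CUDA events, as in this sketch (the kernel and phase name are illustrative stand-ins, not the actual test code):&lt;/p&gt;

&lt;pre&gt;
// gpu_timing.cu - sketch of timing one GPU phase with CUDA events.
// Illustrative only; compile with nvcc.
#include &amp;lt;cuda_runtime.h&gt;
#include &amp;lt;stdio.h&gt;

__global__ void compute_phase(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i &amp;lt; n)
        data[i] = data[i] * 2.0f + 1.0f;             // stand-in for the real computation
}

int main(void)
{
    const int n = 1048576;
    float *d;
    cudaMalloc(&amp;amp;d, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&amp;amp;start);
    cudaEventCreate(&amp;amp;stop);

    cudaEventRecord(start, 0);                       // timestamp before the phase
    compute_phase&amp;lt;&amp;lt;&amp;lt;(n + 255) / 256, 256&gt;&gt;&gt;(d, n);
    cudaEventRecord(stop, 0);                        // timestamp after the phase
    cudaEventSynchronize(stop);                      // wait for the kernel to finish

    float ms = 0.0f;
    cudaEventElapsedTime(&amp;amp;ms, start, stop);       // elapsed time in milliseconds
    printf("execution time for computing: %.2f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d);
    return 0;
}
&lt;/pre&gt;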

&lt;p&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;Output using GPU:&lt;/span&gt;&lt;/p&gt;

&lt;pre&gt;
device 0           : Quadro FX 5800
device 1           : Tesla C1060
device 2           : Tesla C1060

Selected device: 2 &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;

device 2           : Tesla C1060
major/minor        : 1.3 compute capability
Total global mem   : -262144 bytes
Shared block mem   : 16384 bytes
RegsPerBlock       : 16384
WarpSize           : 32
MaxThreadsPerBlock : 512
TotalConstMem      : 65536 bytes
ClockRate          : 1296000 (kHz)
deviceOverlap      : 1
MultiProcessorCount: 30

Using 1048576 particles
100 time steps
&lt;/pre&gt;
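
&lt;p&gt;A device listing like the one above can be produced with the CUDA runtime device-management API; the following sketch (not the original test program) enumerates the GPUs and selects one of them:&lt;/p&gt;

&lt;pre&gt;
// list_devices.cu - sketch of enumerating CUDA devices and selecting one,
// similar in spirit to the listing above; illustrative only.
#include &amp;lt;cuda_runtime.h&gt;
#include &amp;lt;stdio.h&gt;

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&amp;amp;count);
    if (count == 0) {
        printf("No CUDA device found\n");
        return 1;
    }

    for (int dev = 0; dev &amp;lt; count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&amp;amp;prop, dev);
        printf("device %d           : %s\n", dev, prop.name);
        printf("major/minor        : %d.%d compute capability\n", prop.major, prop.minor);
        printf("MultiProcessorCount: %d\n", prop.multiProcessorCount);
    }

    int selected = count - 1;          // e.g. pick the last device, as in the run above
    cudaSetDevice(selected);
    printf("Selected device: %d\n", selected);
    return 0;
}
&lt;/pre&gt;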
</description>
    </item>
    
    <item>
      <title>GreenJeans</title>
      <link>/hardware/greenjeans/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/hardware/greenjeans/</guid>
      <description>&lt;p&gt;GreenJeans is a new experimental HPC/Beowulf cluster of DSA, built with the aim of creating an economically and environmentally sustainable solution for the scientific HPC field.&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center;&#34;&gt;&lt;img src=&#34;/img/logogreen.png&#34; alt=&#34;logogreen&#34; /&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;The making of GreenJeans&amp;hellip;&lt;/span&gt;&lt;br /&gt;
&lt;img src=&#34;/img/gj.gif&#34; alt=&#34;gj&#34; /&gt;&lt;br /&gt;
On GreenJeans we have installed the following software (a minimal usage sketch follows the list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://developer.nvidia.com/cuda-downloads&#34; target=&#34;_blank&#34;&gt;CUDA&lt;/a&gt;(Driver / Toolkit / SDK)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.oracle.com/technetwork/java/javase/downloads/index.html&#34; target=&#34;_blank&#34;&gt;SDK Java Sun&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MPICH4 V1&lt;/li&gt;
&lt;li&gt;MPICH4 V2&lt;/li&gt;
&lt;li&gt;MPI2-VMI&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://developer.nvidia.com/cuda-downloads&#34; target=&#34;_blank&#34;&gt;Eucalyptus&lt;/a&gt;(KVM/QEMU Hypervisor)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://greenjeans.uniparthenope.it/ganglia&#34; target=&#34;_blank&#34;&gt;Ganglia&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.clusterresources.com/products/torque-resource-manager.php&#34; target=&#34;_blank&#34;&gt;Torque&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
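
&lt;p&gt;As a minimal sketch of how the MPI and CUDA pieces listed above fit together (illustrative code, not part of the GreenJeans installation), each MPI rank can probe the GPU of its own node before doing any work:&lt;/p&gt;

&lt;pre&gt;
// mpi_cuda_probe.cu - sketch combining MPI and the CUDA runtime on one node.
// Illustrative only; build it against the MPI and CUDA toolkits of the cluster.
#include &amp;lt;mpi.h&gt;
#include &amp;lt;cuda_runtime.h&gt;
#include &amp;lt;stdio.h&gt;

int main(int argc, char **argv)
{
    int rank, ngpus = 0;

    MPI_Init(&amp;amp;argc, &amp;amp;argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);

    cudaGetDeviceCount(&amp;amp;ngpus);                 // GPUs visible on this node
    if (ngpus &gt; 0) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&amp;amp;prop, 0);
        printf("rank %d sees %d GPU(s), device 0 = %s\n", rank, ngpus, prop.name);
    } else {
        printf("rank %d sees no GPU\n", rank);
    }

    MPI_Finalize();
    return 0;
}
&lt;/pre&gt;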

&lt;p&gt;Every work node of GreenJeans has an NVIDIA GeForce GTX 560 Ti installed:&lt;/p&gt;

&lt;pre&gt;
Device 0:                                      “GeForce GTX 560 Ti”
CUDA Driver Version:                           4.0
CUDA Runtime Version:                          4.0
CUDA Capability Major/Minor version number:    2.1
Total amount of global memory:                 1072889856 bytes
Multiprocessors x Cores/MP = Cores:            8 (MP) x 48 (Cores/MP) = 384 (Cores)
Total amount of constant memory:               65536 bytes
Total amount of shared memory per block:       49152 bytes
Total number of registers available per block: 32768
Warp size:                                     32
Maximum number of threads per block:           1024
Maximum sizes of each dimension of a block:    1024 x 1024 x 64
Maximum sizes of each dimension of a grid:     65535 x 65535 x 65535
Maximum memory pitch:                          2147483647 bytes
Texture alignment:                             512 bytes
Clock rate:                                    1.64 GHz
Concurrent copy and execution:                 Yes
Run time limit on kernels:                     No
Integrated:                                    No
Support host page-locked memory mapping:       Yes
Compute mode:                                  Default (multiple host threads can use this device simultaneously)
Concurrent kernel execution:                   Yes
Device has ECC support enabled:                No
Device is using TCC driver mode:               No
&lt;/pre&gt;
&lt;p&gt;deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce GTX 560 Ti&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>
