____________________________________________________________________________________________________________________________________________________________________________________________
1) LDAP configuration on the cluster nodes
# apt-get install libnss-ldap libpam-ldap nscd

Answer the configuration prompts as follows:
LDAP server URI: ldap://192.168.1.2/
Distinguished name of the search base: dc=republica,dc=star,dc=wars
LDAP version to use: 3
LDAP account for root: cn=admin,dc=republica,dc=star,dc=wars
LDAP root account password: <you know which one, right? I won't put it here>
Allow LDAP admin account to behave like local root? No
Does the LDAP database require login?: No

In /etc/ldap/ldap.conf, add the following lines:
BASE dc=republica,dc=star,dc=wars
URI ldap://192.168.1.2

In /etc/nsswitch.conf, change the entries as shown below:
passwd: compat ldap
group: compat ldap
shadow: compat ldap

# /etc/init.d/nscd restart

Test the configuration:
# id testeldap
It should return the following:
uid=2000(testeldap) gid=100(users) groups=100(users)

Rafael Gomes
IT Consultant
LPIC-1 MCSO
(71) 8318-0284
________________________________________________________________________________________________________________________________________________________________________________________________

2) Passwordless SSH, proxy, NFS and Torque configuration:

Vitor Vilas Boas vitorvilas@gmail.com 11/16/11 to r2d2_ufba

Folks, passwordless SSH into cluster02 from darthvader is already working for my user. What is still missing:
- Add darthvader to every hosts file.
- Each user must create their own key on darthvader's ssh and create the authorized_keys file.
- Change the fstab on every node so that it uses the /home from darthvader and not from clustermaster, as it is currently set.
I will do this today and should finish by the end of the afternoon.

Here is how users can create their SSH key; if someone can put it on the wiki (Gomex, if you can...):
1 - Log in to darthvader with your own user;
2 - Generate the key with the command --> ssh-keygen -t rsa
3 - Press Enter until the process finishes, entering no passphrase at all (leave it blank).
4 - Enter the directory where the keys are located ---> cd ~/.ssh
5 - Generate the authorized_keys file --> cat id_rsa.pub >> authorized_keys
6 - Test it by logging in to cluster02 --> ssh cluster02 (accept the certificate with yes if asked) and repeat the test.

Proxy
Add to /etc/profile on the workstations (internet access):
export http_proxy=http://192.168.1.1:3128

NFS
Add to /etc/fstab (mounts /home over NFS):
darthvader.bio.intranet.ufba.br:/home /home nfs rw,hard,intr 0 0

Torque on the workstations:
/var/spool/torque/server_name > a single line with the server name "darthvader"
/var/spool/torque/mom_priv/config > add the server name "darthvader"

Fix for any machine that may have been shut down incorrectly:
rm -rf /var/run/network
rm -rf /var/run/network/mountnfs
________________________________________________________________________________________________________________________________________________

3) DHCP configuration file

/etc/dhcp/dhcpd.conf (this is where the configuration that ties each IP to its MAC address lives)
/etc/init.d/isc-dhcp-server restart (command to restart the service without having to reboot the whole system)
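For reference, a minimal sketch of what one of those MAC-to-IP bindings looks like in /etc/dhcp/dhcpd.conf; the node name, MAC address and IP below are illustrative placeholders, not the cluster's real values:

# one "host" block per node (host declarations can be global or sit inside the matching subnet declaration)
host node01 {
  hardware ethernet 00:11:22:33:44:55;   # MAC address of the node's network card
  fixed-address 192.168.1.11;            # IP this node should always receive
}

After editing the file, restart the service with the isc-dhcp-server command above.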
____________________________________________________________________________________________________________________________________________________

4) Clustal-MPI

4a) README CLUSTAL-MPI

*****************************************************************************
CLUSTALW-MPI: ClustalW Analysis Using Grid and Parallel Computing, based on ClustalW, the multiple sequence alignment program (version 1.82, Feb 2001)
*****************************************************************************

This README contains the help with INSTALLATION.

ClustalW is a popular tool for multiple sequence alignment. The alignment is achieved via three steps: pairwise alignment, guide-tree generation and progressive alignment. ClustalW-MPI is an MPI implementation of ClustalW. Based on version 1.82 of the original ClustalW, both the pairwise and progressive alignments are parallelized with MPI, a popular message passing programming standard.

ClustalW-MPI is freely available to the user community. The software is available at http://www.bii.a-star.edu.sg/software/clustalw-mpi/
The original ClustalW/ClustalX can be found at ftp://ftp-igbmc.u-strasbg.fr.
Please send bug reports, comments etc. to "kuobin@bii.a-star.edu.sg".

INSTALLATION (for Unix/Linux)
------------
This is an extremely quick installation guide.
1. Make sure you have MPICH or LAM installed on your system.
2. Unpack the package in any working directory:
   tar xvfp clustalw-mpi-0.1.tar.gz
3. Take a look at the Makefile and make the modifications that you might desire, in particular:
   CC = mpicc
   CFLAGS = -c -g
   or
   CFLAGS = -c -O3
4. Build the whole thing simply by typing "make".
5. If you wanted to use serial code to compute the neighbor-joining tree, you would have to define the macro "SERIAL_NJTREE" when compiling trees.c:
   CFLAGS = -c -g -DSERIAL_NJTREE
   This macro is defined in the default Makefile. That is, to use MPI code for the neighbor-joining tree, you have to "undefine" the macro "SERIAL_NJTREE" in your Makefile.

SAMPLE USAGE (for Unix/Linux)
------------
1. To make a full multiple sequence alignment (using one master node and 4 computing nodes):
   %mpirun -np 5 ./clustalw-mpi -infile=dele.input
   %mpirun -np 5 ./clustalw-mpi -infile=CFTR.input
2. To make a guide tree only:
   %mpirun -np 5 ./clustalw-mpi -infile=dele.input -newtree=dele.mytree
   %mpirun -np 5 ./clustalw-mpi -infile=CFTR.input -newtree=CFTR.mytree
3. To make a multiple sequence alignment out of an existing tree:
   %mpirun -np 5 ./clustalw-mpi -infile=dele.input -usetree=dele.mytree
   %mpirun -np 5 ./clustalw-mpi -infile=CFTR.input -usetree=CFTR.mytree
4. The environment variable CLUSTALG_PARALLEL_PDIFF can be used to run the progressive alignment based on the parallelized pdiff(). By default the variable CLUSTALG_PARALLEL_PDIFF is not set, and the progressive alignment will be parallelized according to the structure of the neighbor-joining tree. However, the parallelized pdiff() will still be used in the later stage when prfalign() tries to align more distant sequences to the profiles. If you don't understand this, simply leave the variable unset.

KNOWN PROBLEM
------------
1. On Intel IA32 platforms, slightly different neighbor-joining trees might be obtained with and without enabling the compiler's optimization flags. This is due to the fact that Intel processors use 80-bit FPU registers to cache "double" variables, which are supposed to be 64 bits long. With the '-O1' or higher optimizer flag, the compiler does not always immediately save the variables involved in a double operation back to memory. Instead, intermediate results are kept in registers, with 80 bits of precision. This causes a problem for nj_tree() because it is sensitive to the precision of floating point numbers.
Solutions:
(1) Other platforms, including Intel's IA64, don't seem to have this problem.
or
(2) Building "trees.c" with options like the ones below (potentially with a high performance overhead):
    %gcc -c -O3 -ffloat-store trees.c    // GNU gcc
    %icc -c -O3 -mp trees.c              // Intel C compiler
or
(3) Declaring the relevant variables as "volatile" in nj_tree():
    volatile double diq, djq, dij, d2r, dr, dio, djo, da;
    volatile double *rdiq;
    rdiq = (volatile double *)malloc(((last_seq-first_seq+1)+1)*sizeof(volatile double));
    ...
    ...
    free((void *)rdiq);

Note: clustalw-mpi is already installed on the whole cluster.

1. Introduction of ClustalW-MPI
ClustalW is a general-purpose multiple sequence alignment program for DNA or proteins. The alignment is achieved via three steps: pairwise alignment, guide-tree generation and progressive alignment.
2. Input Sequences
All input sequences must be in one file, one after another. Seven formats are automatically recognised: NBRF/PIR, EMBL/SWISSPROT, Pearson (Fasta), Clustal (.aln), GCG/MSF (Pileup), GCG9/RSF and GDE flat file. All non-alphabetic characters (spaces, digits, punctuation marks) are ignored except "-", which is used to indicate a GAP ("." in GCG/MSF).
If the input file is in GenBank (.gb) or another format that is not supported by Clustal W and cannot be converted by Clustal X, you can use EMBOSS-3.0 ($ seqret -osformat fasta) to convert the file to Fasta (.fasta) format beforehand.

3. Job Submission and Monitoring
Sample PBS script for a 4-node job - "clustalw.pbs"
You can use a script along the lines of the sketch below to build a full multiple sequence alignment job using 4 computing nodes (8 CPUs). To edit the script, you may run pico, e.g.
% pico clustalw.pbs
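The script itself did not survive on this page, so here is a minimal sketch of what a clustalw.pbs for this job could look like. The job name, walltime and ppn value are assumptions, and the mpirun flags may differ slightly depending on which MPI implementation is selected (see item 7); adjust everything to the cluster's actual queue settings and your own input file:

#!/bin/bash
#PBS -N clustalw-mpi          # job name (arbitrary)
#PBS -l nodes=4:ppn=2         # 4 computing nodes, 2 CPUs each = 8 CPUs (assumed layout)
#PBS -l walltime=01:00:00     # assumed time limit, adjust as needed
#PBS -j oe                    # merge stdout and stderr into one output file

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# one MPI process per allocated CPU, using the node list PBS provides
mpirun -np 8 -machinefile $PBS_NODEFILE ./clustalw-mpi -infile=CFTR.input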
Job Submission
Then, you can run the job by submitting the script to PBS as follows:
% qsub clustalw.pbs
For other PBS commands, please refer to the section "Starter Guide for PBS".

Job Monitoring
% qstat              List the current jobs on the cluster
% qstat -a           List all current jobs (alternative, wider format)
% qstat -r           List only the currently running jobs
% qstat -n           List the nodes allocated to each running job
% qstat -f <jobid>   Show detailed information on a specific job
% qstat -Qf          Show detailed information for all queues

Advanced PBS commands: advanced users can type % man followed by the command name (e.g. % man qstat) to see the details of the commands above.
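Putting the above together, a typical submit-and-monitor sequence looks like this (<jobid> is whatever id qsub printed):

% qsub clustalw.pbs       # prints the id of the new job
% qstat -a                # check that the job is queued or running
% qstat -n                # see which nodes it was allocated
% qstat -f <jobid>        # full details on that specific job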
__________________________________________________________________________________________________________________________________________________
5) Installing MEGA on Debian Squeeze
5.1) Add the line deb http://ubuntu.mirror.cambrium.nl/ubuntu/ natty main universe to sources.list
5.2) Install wine1.2: aptitude install wine1.2
5.3) Download wine1.2-gecko from the mirror: mirror.pnl.gov/ubuntu/
5.4) Install wine1.2-gecko using Gdebi
5.5) Add the MEGA repository deb http://update.megasoftware.net/deb/ mega main to sources.list
5.6) Install MEGA: aptitude install mega (the steps above are consolidated in the sketch below)
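A consolidated sketch of the steps above as shell commands, run as root; the exact wine1.2-gecko .deb file name is a guess, use whatever you actually downloaded from the mirror:

# 5.1 - add the Ubuntu natty repository
echo "deb http://ubuntu.mirror.cambrium.nl/ubuntu/ natty main universe" >> /etc/apt/sources.list
aptitude update

# 5.2 - install wine 1.2
aptitude install wine1.2

# 5.3 / 5.4 - install the wine1.2-gecko .deb downloaded from mirror.pnl.gov/ubuntu/ (file name will vary)
gdebi wine1.2-gecko_*.deb

# 5.5 - add the MEGA repository
echo "deb http://update.megasoftware.net/deb/ mega main" >> /etc/apt/sources.list
aptitude update

# 5.6 - install MEGA
aptitude install mega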
Rodrigo.
____________________________________________________________________________________________________________________________________________________
6) MPICH2 - Debian and RedHat
Copied from http://www.flaviotorres.com.br/fnt/artigos/mpich2.php
By: Flavio Torres - ftorres[@]ymail.com
Published on: 03/08/2007

Installing MPICH2 on Linux systems.

MPICH is one of the existing implementations of the MPI (Message-Passing Interface) standard for message passing libraries. Besides the MPI library, MPICH contains a programming environment that includes a set of libraries for performance analysis (profiling) of MPI programs and a graphical interface for all the tools. In other words, with MPI you can have a single process running across multiple servers, i.e. a cluster.

This is a second article, for those who do not have Ubuntu Dapper or who have fought too hard with Python 2.3.

Required packages:
* gcc
* cpp
* libc6
* libc6-dev
* g77
* g++
* Python 2.2 or later

Installing the required packages:
apt-get install gcc cpp libc6 libc6-dev g77 g++
Python, in 99% of installations, already comes in version 2.4 or 2.5.

Get the mpich2 tarball from the project site: http://www-unix.mcs.anl.gov/mpi/mpich/
wget http://www-unix.mcs.anl.gov/mpi/mpich/downloads/mpich2-1.0.5p4.tar.gz

Unpack the file inside your home directory:
tar -xvzf mpich2-1.0.5p4.tar.gz ; cd mpich2-1.0.5p4

Compile and install:
./configure -prefix=/home/you/mpich2-install |& tee configure.log
make |& tee make.log
make install |& tee install.log
If you do not use a prefix, the default will be /usr/local/bin.

Add the installation location to your $PATH;
For csh and tcsh: setenv PATH /home/you/mpich2-install/bin:$PATH
For bash and sh: export PATH=/home/you/mpich2-install/bin:$PATH

Checking that everything is in order:
which mpd
which mpiexec
which mpirun
which should return the installation location of the executables.

After installing on all hosts, we must configure the host names for resolution; edit your /etc/hosts and configure all the machines:
vi /etc/hosts
192.168.0.1 host1
192.168.0.5 host2
192.168.0.2 host3

Now we must configure ssh for passwordless connections between all the hosts.
Generate the key (remember to do this on every server):
ssh-keygen -t dsa -b 1024
* Do not type a passphrase when prompted, just press <enter>

Now configure ssh to authenticate without a password:
Step 1) Copy the key from host1 to host2 and host3:
host1$ scp .ssh/id_dsa.pub usuario@host2:
host1$ scp .ssh/id_dsa.pub usuario@host3:
Step 2) Set up the key for authentication on host2:
host2$ cat id_dsa.pub >> .ssh/authorized_keys
host2$ chmod 600 .ssh/authorized_keys
Step 3) Set up the key for authentication on host3:
host3$ cat id_dsa.pub >> .ssh/authorized_keys
host3$ chmod 600 .ssh/authorized_keys
Now repeat the 3 steps for the 3 machines; at the end you should be able to ssh without a password from:
host1 > host2 and host3
host2 > host1 and host3
host3 > host2 and host1

Configure the mpi files, which are:
* mpd.conf
* mpd.hosts
The mpd.conf file contains the mpi authentication information shared between the machines, so the password must be the SAME on all hosts. If you are using root to run the tests, this file must be placed in /etc; if you are using a regular user, it must be placed in $HOME.
Adding the password to .mpd.conf and copying it to the other machines:
host1$ echo "MPD_SECRETWORD=mr45-j9z" > .mpd.conf
host1$ chmod 600 .mpd.conf
host1$ scp .mpd.conf host2:
host1$ scp .mpd.conf host3:

The mpd.hosts file contains the machines that are part of the cluster for THIS user.
Adding the cluster machines to the .mpd.hosts file and replicating it to the other machines:
host1$ echo "host1" > .mpd.hosts
host1$ echo "host2" >> .mpd.hosts
host1$ echo "host3" >> .mpd.hosts
host1$ scp .mpd.hosts host2:
host1$ scp .mpd.hosts host3:

Done, the boring part is over. Now let's start the mpi daemon with mpdboot:
host1$ mpdboot -n 3 -f .mpd.hosts
host1$ mpdtrace
host1
host2
host3
Perfect, they are all responding!! Now just play with a simple test:
host1$ mpiexec -n 5 mpich2-1.0.5p4/examples/cpi
Process 0 of 5 is on host1
Process 2 of 5 is on host2
Process 1 of 5 is on host3
Process 4 of 5 is on host3
Process 3 of 5 is on host1
pi is approximately 3.1415926544231230, Error is 0.0000000008333298
wall clock time = 0.925560
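Once the ring is up, the same tools work for your own MPI programs. A short sketch, assuming a source file hello.c of your own (hypothetical name) and that the MPICH2 bin directory is already in your $PATH:

host1$ mpicc -o hello hello.c     # mpicc ships with the MPICH2 installation
host1$ mpiexec -n 6 ./hello       # the binary must be visible on every host (shared /home, or copy it)
host1$ mpdallexit                 # shut the mpd ring down when you are done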
____________________________________________________________________________________________________________________________________________________________________________________________

7) mrbayes-multi
Source: http://nebc.nerc.ac.uk/bioinformatics/docs/mrbayes-multi.html
----------------
This package uses the Debian Alternatives system to allow you to choose between the different MPI implementations.
You can check which version of MPI you use with
update-alternatives --list mpirun
update-alternatives --list mpi (for the development files)
Use 'update-alternatives --display' to list all installed implementations, and 'update-alternatives --config' to configure which implementation to use.

Run update-alternatives --config mpirun to set up mpirun (use this one).
__________________________________________________________________________________________________________________________________________________
-- RodrigoZucoloto - 02 Nov 2011