
Getting started with clusters

Requesting an account

Brutus

Brutus is no longer in operation.

Euler

Everybody at ETH Zurich can use the Euler cluster. The first login of a new user triggers a process that sends a verification code to the user's ETH email address (USERNAME@ethz.ch, with USERNAME being the ETH account name). The user is then prompted to enter the verification code, and by entering the correct code, the cluster account of the user is created.

Leonhard

Leonhard Open has been integrated into the Euler cluster.

Access to Leonhard Med is restricted to Leonhard Med shareholders. Guest users cannot access the Leonhard cluster.

MATLAB Distributed Computing Server (MDCS)

Any member of ETH can use the MATLAB Distributed Computing Server (MDCS) service; the only requirement is a valid ETH account. In order to use this service, you first need to login to the Euler cluster once and accept the usage agreement.

Please note that the MDCS will be phased out due to the transition of the batch system from IBM LSF to Slurm.

CLC Genomics Server

The CLC genomics server uses local accounts for authentication. If you would like to use this service, then please contact cluster support to request your CLC account.

Please note that the CLC Genomics Server will be phased out due to the transition of the batch system from IBM LSF to Slurm.

Accessing the clusters

Who can access the HPC clusters

The Euler cluster is open to all members of ETH and external users who have a collaboration with a research group at ETH Zurich. Members of other institutes who have a collaboration with a research group at ETH may use the HPC clusters for the purpose of said collaboration. Their counterpart ("sponsor") at ETH must ask the local IT support group (ISG) of the corresponding department to create an ETH guest account for them. The account needs to be linked to a valid e-mail address. For external users, the VPN service also needs to be enabled. Once the ETH guest account has been created, they can access the clusters like members of ETH.

Legal compliance

The HPC clusters of ID SIS HPC are subject to ETH's acceptable use policy for IT resources (Benutzungsordnung für Telematik an der ETH Zürich, BOT). In particular:

  • Accounts are strictly personal.
  • You must not share your account (password, ssh keys) with anyone else.
  • You must not use someone else's account, with or without their consent.
  • If you suspect that someone used your account, change your password and contact cluster support.

For changing your ETH password in the Identity and Access Management (IAM) system of ETH, please have a look at the documentation and the video of IT Services.

In case of abuse, the offender's account may be blocked temporarily or closed. System administrators are obliged by law to investigate abusive or illegal activities and report them to the relevant authorities.

Security

Access to the HPC clusters of ID SIS HPC is only possible via secure protocols (ssh, sftp, scp, rsync). The HPC clusters are only accessible from inside the ETH network. If you would like to connect from a computer that is not inside the ETH network, then you need to establish a VPN connection first. Outgoing connections to computers inside the ETH network are not blocked. If you would like to connect to an external service, then please use the ETH proxy service (proxy.ethz.ch) by loading the module:

module load eth_proxy
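The module works by setting the usual proxy environment variables for the shell. A minimal sketch of the effect is shown below; the exact host and port are an assumption here, so verify with `env | grep -i proxy` on the cluster after loading the module:

```shell
# Sketch: approximately what `module load eth_proxy` does (host/port assumed,
# check `env | grep -i proxy` after loading the module for the real values).
export http_proxy="http://proxy.ethz.ch:3128"
export https_proxy="http://proxy.ethz.ch:3128"
echo "$http_proxy"
```

Tools such as curl and wget honor these variables automatically, which is why loading the module is enough to reach external services from the compute nodes.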


SSH

You can connect to the HPC clusters via the SSH protocol. For this purpose you need to have an SSH client installed. To connect to an HPC cluster, you need the hostname of the cluster that you would like to connect to and your ETH account credentials (username, password).

Cluster         Hostname
Euler           euler.ethz.ch
Leonhard Open   login.leonhard.ethz.ch

Linux, Mac OS X

Open a shell (Terminal in OS X) and use the standard ssh command

ssh username@hostname

where username is your ETH username and the hostname can be found in the table shown above. If for instance user sfux would like to access the Euler cluster, then the command would be

sfux@workstation:~$ ssh sfux@euler.ethz.ch
sfux@euler.ethz.ch's password:
Last login: Fri Sep 13 from ...

Eidgenoessische Technische Hochschule Zuerich
Swiss Federal Institute of Technology Zurich

E U L E R  C L U S T E R

https://scicomp.ethz.ch
cluster-support@id.ethz.ch
=========================================================================
[sfux@eu-login-XX ~]$

Windows

Since Windows does not provide an ssh client as part of the operating system, users need to install third-party software in order to be able to establish ssh connections.

Widely used ssh clients are for instance MobaXterm, PuTTY and Cygwin.

For using MobaXterm, you can either start a local terminal and use the same SSH command as for Linux and Mac OS X, or you can click on the session button, choose SSH and then enter the hostname and username. After clicking on OK, you will be asked to enter your password.

If you use PuTTY, then it is sufficient to specify the hostname of the cluster that you would like to access and to click on the Open button. Afterwards, you will be prompted to enter your ETH account credentials. When using Cygwin, you can enter the same command as Linux and Mac OS X users.

ssh username@hostname

SSH keys

SSH keys allow you to log in to a cluster without having to type a password. This can be useful for file transfers and automated tasks. Used properly, SSH keys are much safer than passwords. Keys always come in pairs: a private key (stored on your local workstation) and a public key (stored on the computer you want to connect to). You can generate as many key pairs as you want. To make the keys even more secure, you should protect them with a passphrase.

Linux, Mac OS X

For a good documentation on SSH please have a look at the SSH website. It contains a general overview on SSH, instructions on how to create SSH keys and instructions on how to copy an SSH key.

On your computer, use ssh-keygen to generate a key pair with the ed25519 algorithm. By default the private key is stored as $HOME/.ssh/id_ed25519 and the public key as $HOME/.ssh/id_ed25519.pub.

For security reasons, we recommend that you use a different key pair for every computer you want to connect to. For instance, if you are using both Euler and Leonhard:

ssh-keygen -t ed25519 -f $HOME/.ssh/id_ed25519_euler      # please enter a strong, non-empty passphrase when prompted
ssh-keygen -t ed25519 -f $HOME/.ssh/id_ed25519_leonhard   # please enter a strong, non-empty passphrase when prompted

Once this is done, copy the public key to Euler or Leonhard using one of the commands:

ssh-copy-id -i $HOME/.ssh/id_ed25519_euler.pub username@euler.ethz.ch
ssh-copy-id -i $HOME/.ssh/id_ed25519_leonhard.pub username@login.leonhard.ethz.ch

Where username is your ETH username. You will need to enter your ETH (LDAP) password to connect to Euler / Leonhard.

If you use an SSH agent, then you also need to add the key there using ssh-add.
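For illustration, the snippet below generates a throwaway key pair in a temporary directory so you can see what ssh-keygen produces. The empty passphrase (-N '') is only acceptable for this disposable demo key; for real cluster keys, always set a strong passphrase:

```shell
# Generate a disposable ed25519 demo key pair in a temporary directory.
# NOTE: -N '' (empty passphrase) is for this demo only, never for real keys.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$tmp/id_ed25519_demo" -C "demo key" >/dev/null
ls "$tmp"                                     # id_ed25519_demo  id_ed25519_demo.pub
ssh-keygen -l -f "$tmp/id_ed25519_demo.pub"   # print the key's fingerprint
```

The fingerprint printed by `ssh-keygen -l` is what you can compare on both ends to make sure the right public key was installed.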

Windows

For Windows, third-party software (PuTTYgen, MobaXterm) is required to create SSH keys. For good documentation on SSH, please have a look at the SSH website.

Please either use PuTTYgen or the command (MobaXterm)

ssh-keygen -t ed25519

to generate a key pair with the ed25519 algorithm and store both the public and the private key on your local computer. For security reasons, we recommend that you use a different key pair for every computer you want to connect to. For instance, if you are using both Euler and Leonhard, then save the keys as id_ed25519_euler and id_ed25519_leonhard.

Afterwards please login to the cluster and create the hidden directory $HOME/.ssh, which needs to have the unix permissions 700:

mkdir -p -m 700 $HOME/.ssh

In order to set up passwordless access to a cluster, copy the public key from your workstation to the $HOME/.ssh directory on the cluster (for this example, we use the Euler cluster; if you would like to set up access to another cluster, then you need to use the corresponding hostname instead of euler.ethz.ch) using for instance WinSCP or MobaXterm. The file needs to be stored as

$HOME/.ssh/authorized_keys

on the cluster.
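The required permissions can be double-checked with a short sketch; here a temporary directory stands in for the real $HOME on the cluster, and the conventional modes (700 for .ssh, 600 for authorized_keys) are demonstrated:

```shell
# Demonstrate the expected permissions, using a temp dir in place of $HOME:
# the .ssh directory must be 700 and authorized_keys should be 600.
demo=$(mktemp -d)
mkdir -p -m 700 "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"
chmod 600 "$demo/.ssh/authorized_keys"
stat -c '%a' "$demo/.ssh"                  # 700
stat -c '%a' "$demo/.ssh/authorized_keys"  # 600
```

If these permissions are too open, the SSH server silently ignores the authorized_keys file and falls back to password authentication, which is a common reason why key-based login "does not work".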

Safety rules

  • Always use a (strong) passphrase to protect your SSH key. Do not leave it empty!
  • Never share your private key with somebody else, or copy it to another computer. It must only be stored on your personal computer.
  • Use a different key pair for each computer you want to connect to
  • Do not reuse the key pairs for Euler / Leonhard for other systems
  • Do not keep open SSH connections in detached screen sessions
  • Disable the ForwardAgent option in your SSH configuration and do not use ssh -A (or use ssh -a to explicitly disable agent forwarding)

How to use keys with non-default names

If you use different key pairs for different computers (as recommended above), you need to specify the right key when you connect, for instance:

ssh -i $HOME/.ssh/id_ed25519_euler username@euler.ethz.ch

To make your life easier, you can configure your ssh client to use this option automatically by adding the following lines in your $HOME/.ssh/config file:

Host login.leonhard.ethz.ch
    IdentityFile ~/.ssh/id_ed25519_leonhard
Host euler.ethz.ch
    IdentityFile ~/.ssh/id_ed25519_euler
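You can check that such a per-host entry is actually picked up without connecting anywhere, using `ssh -G`, which prints the effective client configuration for a host. The sketch below writes a minimal config to a temporary file; the hostname and key name follow the examples above:

```shell
# Write a minimal per-host config to a temp file and let ssh show the
# effective configuration it would use (ssh -G does not connect).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host euler.ethz.ch
    IdentityFile ~/.ssh/id_ed25519_euler
EOF
ssh -G -F "$cfg" euler.ethz.ch | grep identityfile
```

This is a convenient way to debug which key, user, and options ssh would apply before you ever type a password.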

First login

On your first login, you need to accept the cluster's usage rules. Afterwards your account is created automatically. Please find below the user agreement for the Euler cluster as an example:

Please note that the Euler cluster is subject to the "Acceptable Use Policy for Telematics Resources" ("Benutzungsordnung fuer Telematik", BOT) of ETH Zurich and relevant documents, in particular:
* your Euler account (like your ETH account) is *strictly personal*
* you are responsible for all activities done under your account
* you must keep your password secure and may not give it to a 3rd party
* you may not share your account with anyone, including your supervisor
* you may not use someone else's account, with or without their consent
* you must comply with all civil and criminal laws (copyright, privacy, data protection, etc.)
* any violation of these rules and policies may lead to administrative and/or legal measures
Before you can proceed you must confirm that you have read, understood, and agree to the rules and policies mentioned above.

On Euler and Leonhard Open, the first login of a new user (for Leonhard Open only for shareholder users) triggers a process that sends a verification code to the user's ETH email address (USERNAME@ethz.ch, with USERNAME being the ETH account name). The user is then prompted to enter the verification code, and by entering the correct code, the cluster account of the user is created.

X11

The clusters of ID SIS HPC use the X Window System (X11) to display a program's graphical user interface (GUI) on a user's workstation. You need to install an X11 server on your workstation to display X11 windows. The ports used by X11 are blocked by the cluster's firewall. To circumvent this problem, you must open an SSH tunnel and redirect all X11 communication through that tunnel.

Linux

Xorg (X11) is normally installed by default as part of most Linux distributions. If you are using Xorg 1.17 or newer, then please have a look at the troubleshooting section at the bottom of this wiki page.

ssh -Y username@hostname

Mac OS X

Since X11 is no longer included in OS X, you must install XQuartz. If you are using XQuartz 2.7.9 or newer, then please have a look at the troubleshooting section at the bottom of this wiki page.

ssh -Y username@hostname

Windows

X11 is not supported by Windows. You need to install a third-party X11 server in order to use X11 forwarding, for instance Xming or the X servers integrated in MobaXterm and Cygwin/X.

VPN

When connecting from outside of the ETH network to one of our HPC clusters, you first need to establish a VPN connection. For installing a VPN client, please access the ETH VPN service website in your browser. After logging in to the website, it will detect whether a VPN client is already installed on your computer and otherwise install one automatically. You can find more detailed instructions on the ETH website.

Please note that for establishing a VPN connection, you need to use your network password instead of your main password. If you have not yet set your network password, then please go to the ETH password self-service page, log in with your ETH account credentials and click on Passwort ändern (change password). There you can set your network password.


After establishing a VPN connection, you can login to our clusters via SSH.

Troubleshooting

Permission denied

If you enter a wrong password 3 times, then you will get a permission denied error:

sfux@workstation:~$ ssh sfux@euler.ethz.ch
sfux@euler.ethz.ch's password:
Permission denied, please try again.
sfux@euler.ethz.ch's password:
Permission denied, please try again.
sfux@euler.ethz.ch's password:
Permission denied (publickey,password,hostbased).
sfux@workstation:~$

In case you receive a "Permission denied" error, please check if you entered the correct password. If you think that your account has been compromised, then please contact the service desk of IT services of ETH Zurich.

If you enter a wrong password too many times or at a high frequency, then we might block access to the clusters for your account, because it could be compromised. If your account has been blocked by the HPC group, then please contact cluster support.

Timeout

If you try to login and receive a timeout error, then it is very likely that you tried to connect from outside of the ETH network to one of the HPC clusters.

sfux@workstation:~$ ssh -Y sfux@euler.ethz.ch
ssh: connect to host euler.ethz.ch port 22: Connection timed out

Please either connect from the inside of the ETH network, or establish a VPN connection.

setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

If you are using a Mac, please try to comment out the following lines in /etc/ssh/ssh_config on your workstation:

Host *
    SendEnv LANG LC_*

This should solve the problem.

Too many authentication failures

This error can be triggered if you have more than 6 private SSH keys in your local $HOME/.ssh directory. In this case, specify the SSH key to use with the -i option and add the IdentitiesOnly=yes option, for example:

sfux@workstation:~$ ssh -i $HOME/.ssh/id_ed25519 -o IdentitiesOnly=yes sfux@euler.ethz.ch

Indirect GLX rendering error

When using an SSH connection with X11 forwarding enabled, newer versions of the Xorg server show an error message when the graphical user interface of an application is started:

X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 27
Current serial number in output stream: 30

This error is caused by starting your X11 server without enabling the setting for indirect GLX rendering (iglx), which is required for X11 forwarding. Up to version 1.16 of the Xorg server, the iglx setting was enabled by default. With version 1.17, the default changed from +iglx to -iglx. Now the setting needs to be enabled either in the Xorg configuration file or with a command line option when starting the Xorg server manually. For XQuartz versions up to 2.7.8, the iglx setting is enabled by default. If you would like to use XQuartz 2.7.9 or newer, then please make sure that you enable the iglx setting when the X server is started.

This problem is described in the following article:

https://www.phoronix.com/scan.php?page=news_item&px=Xorg-IGLX-Potential-Bye-Bye


Data management

Introduction

On our cluster, we provide multiple storage systems, which are optimized for different purposes. Since the available storage space on our clusters is limited and shared between all users, we set quotas in order to prevent single users from filling up an entire storage system with their data.

A summary of general questions about file systems, storage and file transfer can be found in our FAQ. If you have questions or encounter problems with the storage systems provided on our clusters or file transfer, then please contact cluster support.

Personal storage (everyone)

Home

On our clusters, we provide a home directory (folder) for every user that can be used for safe long term storage of important and critical data (program source, script, input file, etc.). It is created on your first login to the cluster and accessible through the path

/cluster/home/username

The path is also saved in the variable $HOME. The permissions are set such that only you, and no other user, can access the data in your home directory. Your home directory is limited to 16 GB and a maximum of 100'000 files and directories (inodes). The content of your home is saved every hour and there is also a nightly backup (tape).

Scratch

We also provide a personal scratch directory (folder) for every user, which can be used for short-term storage of larger amounts of data. It is created when you access it for the first time through the path

/cluster/scratch/username

The path is also saved in the variable $SCRATCH. It is visible (mounted) only when you access it. If you try to access it with a graphical tool, you may need to specify the full path, as it might not be visible in the top-level directory. Before you use your personal scratch directory, please carefully read the usage rules to avoid misunderstandings. The usage rules can also be displayed directly on the cluster with the following command.

cat $SCRATCH/__USAGE_RULES__

Your personal scratch directory has a disk quota of 2.5 TB and a maximum of 1'000'000 files and directories (inodes). There is no backup for the personal scratch directories and they are purged on a regular basis (see usage rules).

For personal scratch directories, there are two limits (soft and hard quota). When reaching the soft limit (2.5 TB), there is a grace period of one week during which users can use up to 10% more than their allowed capacity (this upper limit is called the hard quota); this applies to both the number of inodes and the space. If the used capacity is still above the soft limit after the grace period, the directory is locked for new writes until usage is again below the soft quota.
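The soft/hard quota relationship can be written down explicitly. The sketch below assumes a soft quota of 2500 GB and 1'000'000 inodes (treat these numbers as illustrative) and derives the corresponding hard limits from the 10% grace margin:

```shell
# Hard quota = soft quota + 10% grace, applied to both space and inodes.
# The soft limits below are assumptions for illustration.
soft_gb=2500
soft_inodes=1000000
hard_gb=$(( soft_gb * 110 / 100 ))
hard_inodes=$(( soft_inodes * 110 / 100 ))
echo "space:  soft ${soft_gb} GB, hard ${hard_gb} GB"
echo "inodes: soft ${soft_inodes}, hard ${hard_inodes}"
```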

Group storage (shareholders only)

Project

Shareholder groups have the option to purchase additional storage inside the cluster. The project file system is designed for safe long-term storage of critical data (like the home directory). Shareholder groups can buy as much space as they need. The path for project storage is

/cluster/project/groupname

Access rights and restrictions are managed by the shareholder group. We recommend using ETH groups for this purpose. If you are interested in more information and prices for the project storage, then please contact cluster support.

Work

Apart from project storage, shareholder groups also have the option to buy so-called work (high-performance) storage. It is optimized for I/O performance and can be used for short- or medium-term storage for large computations (like scratch, but without regular purge). Shareholders can buy as much space as they need. The path for work storage is

/cluster/work/groupname

Access rights and restrictions are managed by the shareholder group. We recommend using ETH groups for this purpose. The directory is visible (mounted) only when accessed. If you are interested in more information and prices for the work storage, then please contact cluster support.

For /cluster/work directories, there are two limits (soft and hard quota). When reaching the soft limit, there is a grace period of one week during which users can use up to 10% more than their allowed capacity (this upper limit is called the hard quota); this applies to both the number of inodes and the space. If the used capacity is still above the soft limit after the grace period, the directory is locked for new writes until usage is again below the soft quota.

Local scratch (on each compute node)

The compute nodes in our HPC clusters also have local hard drives, which can be used to store data temporarily during a calculation. The main advantage of the local scratch is that it is located directly inside the compute nodes and not attached via the network. This is very beneficial for serial, I/O-intensive applications. The path of the local scratch is

/scratch

You can either create a directory in local scratch yourself as part of a batch job, or you can use a directory in local scratch that is automatically created by the batch system. LSF creates a unique directory in local scratch for every job. At the end of the job, LSF also takes care of cleaning up this directory. The path of the directory is stored in the environment variable

$TMPDIR

If you use $TMPDIR, then you need to request scratch space from the batch system.
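The usual staging pattern (copy input to local scratch, compute there, copy results back) can be sketched in a few shell commands. Here a temporary directory stands in for the LSF-provided $TMPDIR, and `tr` stands in for your actual application:

```shell
# Staging pattern for local scratch, simulated locally:
work=$(mktemp -d)       # stands in for your home/project directory
scratch=$(mktemp -d)    # stands in for the LSF-provided $TMPDIR on a node
echo "some data" > "$work/sample.txt"
cp "$work/sample.txt" "$scratch/"                              # 1. stage input in
tr 'a-z' 'A-Z' < "$scratch/sample.txt" > "$scratch/sample.out" # 2. compute on local scratch
cp "$scratch/sample.out" "$work/"                              # 3. copy results back
cat "$work/sample.out"   # SOME DATA
```

In a real batch job you would replace $work with your network storage path and $scratch with "$TMPDIR", so all intensive I/O stays on the node-local disk.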

External storage

Please note that external storage is convenient for bringing data into the cluster or for storing data for a longer time. However, we recommend not to process data directly from external storage systems in batch jobs on Euler, as this can be very slow and can put a high load on the external storage system. Please rather copy data from the external storage system to some cluster storage (home directory, personal scratch directory, project storage, work storage, or local scratch) before you process it in a batch job. After processing the data from a cluster storage system, you can copy the results back to the external storage system.

Central NAS/CDS

Groups who have purchased storage on the central NAS of ETH or CDS can ask the storage group of IT services to export it to our HPC clusters. There are certain requirements that need to be fulfilled in order to use central NAS/CDS shares on our HPC clusters.

  • The NAS/CDS share needs to be mountable via NFS (shares that only support CIFS cannot be mounted on the HPC clusters).
  • The NAS/CDS share needs to be exported to the subnet of our HPC clusters (please contact ID Systemdienste and ask them for an NFS export of your NAS/CDS share).
  • Please carefully set the permissions of the files and directories on your NAS/CDS share if other cluster users should not have read/write access to your data.

NAS/CDS shares are then mounted automatically when you access them. The mount-point of such a NAS/CDS share is

/nfs/servername/sharename

A typical NFS export file to export a share to the Euler cluster would look like

# cat /etc/exports
/export xxx.xxx.xxx.xxx/26(rw,root_squash,secure) xxx.xxx.xxx.xxx/16(rw,root_squash,secure) xxx.xxx.xxx.xxx/16(rw,root_squash,secure)

If you ask the storage group to export your share to the Euler cluster, then please provide them the above-shown information. If the NAS share is located on the IBM Spectrum Scale storage system, then please also ask for the following options to be set by the storage group:

PriviledgedPort=TRUE
Manage_Gids=TRUE

Please note that these options should only be applied to the Euler subnet. For a general overview on subnets and IP addresses please check the following wiki page. When a NAS share is mounted on our HPC clusters, then it is accessible from all the compute nodes in the cluster.

Local NAS

Groups that operate their own NAS can export a shared file system via NFSv3 to our HPC clusters. In order to use an external NAS on our HPC clusters, the following requirements need to be fulfilled:

  • NAS needs to support NFSv3 (this is currently the only NFS version that is supported from our side).
  • The user and group IDs on the NAS need to be consistent with ETH user names and groups.
  • The NAS needs to be exported to the subnet of our HPC clusters.
  • Please carefully set the permissions of the files and directories on your NAS share if other cluster users should not have read/write access to your data.

We advise you to not use this path directly from your jobs. Rather, you should stage files to and from $SCRATCH.

Your external NAS can then be accessed through the mount-point

/nfs/servername/sharename

A typical NFS export file to export a share to the Euler cluster would look like

# cat /etc/exports
/export xxx.xxx.xxx.xxx/26(rw,root_squash,secure) xxx.xxx.xxx.xxx/16(rw,root_squash,secure) xxx.xxx.xxx.xxx/16(rw,root_squash,secure)

For a general overview on subnets and IP addresses please check the following wiki page.

The share is automatically mounted, when accessed.

Central LTS (Euler)

Groups who have purchased storage on the central LTS of ETH can ask the ITS SD backup group to export it to the LTS nodes in the Euler cluster. There are certain requirements that need to be fulfilled in order to use central LTS shares on our HPC clusters.

  • The LTS share needs to be mountable via NFS (shares that only support CIFS cannot be mounted on the HPC clusters).
  • The LTS share needs to be exported to the LTS nodes of our HPC clusters (please contact ITS SD Backup group and ask them for an NFS export of your LTS share).
  • Please carefully set the permissions of the files and directories on your LTS share if other cluster users should not have read/write access to your data.

The LTS share needs to be exported to the LTS nodes:

/export <LTS node 1>(rw,root_squash,secure) <LTS node 2>(rw,root_squash,secure)

For accessing your LTS share, you would need to login to the LTS nodes in Euler with

ssh USERNAME@<hostname of an LTS node>

Where USERNAME needs to be replaced with your ETH account name. LTS shares are then mounted automatically when you access them. The mount-point of such a LTS share is

/nfs/<servername>/sharename

or

/nfs/<servername>/sharename_repl

depending on which of the two LTS servers your share is located on.

Backup

The users' home directories are backed up every night, and the backup has a retention time of 90 days. For project and work storage, we provide a weekly backup, also with a 90-day retention time. If you have some data that you would like to exclude from the backup, then please create a subdirectory nobackup. Data stored in a nobackup directory will then be excluded from the backup. The subdirectory nobackup can be located at any level of the directory hierarchy:

/cluster/work/YOUR_STORAGE_SHARE/nobackup
/cluster/work/YOUR_STORAGE_SHARE/project/nobackup
/cluster/work/YOUR_STORAGE_SHARE/project/data/nobackup/filename
/cluster/work/YOUR_STORAGE_SHARE/project/data/nobackup/subdir/filename

When large, unimportant temporary data that changes a lot is backed up, this increases the size of the backup pool and hence makes both the backup and the restore process slower. We would therefore ask you to exclude this kind of data from the backup of your group storage share if possible. Excluding large temporary data from the backup will help you and us restore your important data faster in case of an event.
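The effect of the nobackup convention can be illustrated on a local directory tree. The tree below is hypothetical, and the `find` expression mimics how a backup that skips any path containing a nobackup component would select files:

```shell
# Build a hypothetical share layout and list what a backup that skips
# nobackup directories would keep.
share=$(mktemp -d)
mkdir -p "$share/project/data/nobackup/subdir"
touch "$share/project/important.txt"
touch "$share/project/data/nobackup/tmp1.dat"
touch "$share/project/data/nobackup/subdir/tmp2.dat"
# everything NOT under a nobackup directory -> only important.txt
find "$share" -type f ! -path '*/nobackup/*'
```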

Comparison

In the table below, we try to give you an overview of the available storage categories/systems on our HPC clusters as well as a comparison of their features.

Category       Mount point                Life span        Snapshots     Backup  Retention time of backup  Purged                          Max. size  Small files  Large files
Home           /cluster/home              permanent        up to 7 days  yes     90 days                   no                              16 GB      +            o
Scratch        /cluster/scratch           2 weeks          no            no      -                         yes (files older than 15 days)  2.5 TB     o            ++
Project        /cluster/project           4 years          optional      yes     90 days                   no                              flexible   +            +
Work           /cluster/work              4 years          no            yes     90 days                   no                              flexible   o            ++
Central NAS    /nfs/servername/sharename  flexible         up to 8 days  yes     90 days                   no                              flexible   +            +
Local scratch  /scratch                   duration of job  no            no      -                         end of job                      varies     ++           +

Choosing the optimal storage system

When working on an HPC cluster that provides different storage categories/systems, the choice of which system to use can have a big influence on the performance of your workflow. In the best case you can speed up your workflow considerably, whereas in the worst case the system administrator has to kill all your jobs and limit the number of concurrent jobs that you can run, because your jobs slow down the entire storage system and thereby affect other users' jobs. Please take into account the recommendations listed below.

  • Use local scratch whenever possible. With a few exceptions this will give you the best performance in most cases.
  • For parallel I/O with large files, the high-performance (work) storage will give you the best performance.
  • Don't create a large number of small files (a few KB each) on project or work storage, as this could slow down the entire storage system.
  • If your application does very inefficient I/O (opening and closing files multiple times per second and doing small appends on the order of a few bytes), then please don't use project and work storage. The best option for this use case is local scratch.

If you need to work with a large number of small files, then please keep them grouped in a tar archive. During a job you can untar the files to the local scratch, process them, and group the results again in a tar archive, which can then be copied back to your home/scratch/work/project space.
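The pack-unpack-repack workflow described above can be sketched as follows; the directories, file names, and the `cat`-based processing step are stand-ins for your own data and application, with temporary directories simulating network storage and local scratch:

```shell
# Keep many small files in one tar archive; only unpack on local scratch.
work=$(mktemp -d)      # stands in for home/project storage
scratch=$(mktemp -d)   # stands in for $TMPDIR on a compute node
mkdir "$work/inputs"
for i in 1 2 3; do echo "record $i" > "$work/inputs/file$i.txt"; done
tar -C "$work" -czf "$work/inputs.tar.gz" inputs       # one archive instead of many inodes
tar -C "$scratch" -xzf "$work/inputs.tar.gz"           # unpack on local scratch
cat "$scratch"/inputs/*.txt > "$scratch/combined.out"  # "process" the files
tar -C "$scratch" -czf "$work/results.tar.gz" combined.out
tar -tzf "$work/results.tar.gz"                        # combined.out
```

This way the shared file system only ever sees two large files (the input and result archives) instead of thousands of tiny ones.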

File transfer

In order to run your jobs on an HPC cluster, you need to transfer data or input files from/to the cluster. For small and medium amounts of data, you can use standard command-line or graphical tools. If you need to transfer very large amounts of data (on the order of several TB), then please contact cluster support and we will help you set up the optimal strategy to transfer your data in a reasonable amount of time.

Command line tools

For transferring files from/to the cluster, we recommend using standard tools like secure copy (scp) or rsync. The general syntax for using scp is

scp [options] source destination

For copying a file from your PC to an HPC cluster (to your home directory), you need to run the following command on your PC:

scp file username@hostname:

Where username is your ETH username and hostname is the hostname of the cluster. Please note the colon after the hostname. For copying a file from the cluster to your PC (current directory), you need to run the following command on your PC:

scp username@hostname:file .

For copying an entire directory, you need to add the option -r. Therefore you would use the following command to transfer a directory from your PC to an HPC cluster (to your home directory).

scp -r directory username@hostname:

The general syntax for rsync is

rsync [options] source destination

In order to copy the content of a directory from your PC (home directory) to a cluster (home directory), you would use the following command.

rsync -Pav /home/username/directory/ username@hostname:/cluster/home/username/directory

The option -P enables showing the progress of the file transfer, the option -a preserves almost all file attributes, and the option -v gives you more verbose output.

Graphical tools

Graphical clients allow you to mount your Euler home directory on your workstation. These clients are available for most operating systems.

  • Linux + Gnome: Connect to server
  • Linux + KDE: Konqueror, Dolphin, Filezilla
  • Mac OS X: MacFUSE, Macfusion, Cyberduck, Filezilla
  • Windows: WinSCP, Filezilla

WinSCP provides a Windows-Explorer-like user interface with a split screen that allows transferring files via drag-and-drop. After starting your graphical client, you need to specify the hostname of the cluster that you would like to connect to and then click on the connect button. After entering your ETH username and password, you will be connected to the cluster and can transfer files.


Globus for fast file transfer

See Globus for fast file transfer.

Quotas

The home and scratch directories on our clusters are subject to a strict user quota. In your home directory, the soft quota for the amount of storage that you can use is set to 16 GiB and the hard quota is set to 20 GiB. Furthermore, the number of files and directories (inodes) that you can store in your home directory is limited. Your personal scratch directory has a larger storage hard quota and is likewise limited in the number of files and directories (inodes). You can check your current usage with the lquota command.

[username@euler ~]$ lquota
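If lquota is not available, you can approximate your usage with standard tools: du for the storage footprint and find for the inode count (every file and every directory counts as one inode). A rough sketch using a throwaway directory:

```shell
# count inodes (files + directories) the way a quota would see them
mkdir -p quota_demo/sub
touch quota_demo/a quota_demo/sub/b

du -sh quota_demo          # total storage used by the tree
find quota_demo | wc -l    # 4 inodes: quota_demo, a, sub, sub/b
```

Running the find command over your real home directory shows how close you are to the inode limit.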

Brutus readme

Brutus - Authentication Engine Test Release 2 (AET2)

Changes in release AET2:

1 - All user specified server response strings are converted to lowercase now as are the actual server responses.

2 - Fixed the problem encountered whilst trying to change the timeout during operation.

3 - Fixed problem with the default POP3 settings (related to fix 1 above.)

4 - Added brute force password generation

5 - Added save current session

6 - Added auto-save current session

7 - Added restore saved session

8 - Added save custom service

9 - Added load custom service

10 - Added password permutations

11 - Added word list creation functions

12 - Fixed update problems in the Auth. Seq. Definition window

13 - Added pause/resume functions

14 - Added semi-automatic 'learn' function for HTML form/CGI based services

15 - Added skip user on multiple password prompt failures

16 - Added 'use updated form fields' option to HTML form based services to enable attacks against services which use one time values in HTML form fields.

17 - Created a few example services: NetBus, IMAP, Cisco console, Cisco enable etc. Only tested NetBus.

18 - Completed the 'view authentication sequence' display.

19 - Added SMB authentication for Windows and Samba servers (only uses the WNet API at the moment so is very slow)

This component of Brutus is capable of authenticating against a wide range of character-based application protocols. It is used to facilitate dictionary-based user/password attacks against various network applications. This release comes with the following built-in network applications:

HTTP - Basic authentication

HTTP - CGI application authentication (typically used with HTML forms)

There is also a custom facility which allows you to create your own authentication sequences tailored to your target, in addition to being able to modify the built-in applications. Using the custom facility, for instance, it is possible to authenticate against IMAP, NNTP, IRC or nearly anything that uses plaintext user/password negotiation.
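As an illustration, a custom sequence for a plaintext protocol such as POP3 boils down to a few send/expect steps (schematic only; the exact field names in the Brutus dialog differ, and the %username%/%password% placeholders are illustrative):

```
connect  target:110
expect   "+ok"              # server banner (responses are matched in lowercase)
send     "USER %username%"
expect   "+ok"
send     "PASS %password%"
expect   "+ok"              # positive authentication response
```

Defining the final step as a positive response, rather than matching on an error string, follows the reliability advice given later in this readme.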

Using the pre-authentication option gives you the ability to perform some quite twisted dictionary attacks, for instance:

You can define an authentication sequence that will connect via some public SOCKS proxy to a UNIX server offering telnet. Brutus can then log in to the UNIX server and issue a command such as telnet to a further host; Brutus will then run the dictionary attack against that final target. What you have now is a 3-node-hop online dictionary attack.

A simpler example of using a pre-authentication sequence might be to have Brutus connect to the target, again a UNIX server running telnet, and perhaps log in as an unprivileged user. It is then possible to have Brutus run a dictionary attack using su in an attempt to obtain the root password. At all times Brutus will maintain the 'conduit' telnet session, which improves performance.

  • Support for up to 60 simultaneous sessions

  • Highly customisable authentication sequences

  • Single user mode, User List mode, User/Pass combo mode, Password only mode

  • Brute force password mode

  • Word list creation/generation/processing

  • Save and load custom services

  • SOCKS support (with optional authentication)

  • Capable of a high rate of authentications per second over high speed connections
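To get a feel for what brute force password mode enumerates, the candidate space for a fixed charset and length can be generated with a few nested shell loops (a sketch only; Brutus generates these candidates internally, and this assumes a bash shell with brace expansion):

```shell
# enumerate every 3-character password over the charset a-z (26^3 = 17576 candidates)
for a in {a..z}; do
  for b in {a..z}; do
    for c in {a..z}; do
      echo "$a$b$c"
    done
  done
done > brute3.txt

wc -l < brute3.txt   # 17576
```

The exponential growth in candidates is why brute force mode is only practical for short passwords or small charsets.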

Brutus is still under development, as is this component (the authentication engine). When Brutus is eventually finished it will be made available; I have no idea when that will be. However, the next release (barring bug-fix releases) will contain my own SMB authentication routines, which are much faster than using the WNet API; initially, protocols up to and including LANMAN2 will be available. I am also working on getting SSL support in, although that may take a bit longer. The next release will be an extension of Brutus AET2 rather than a rewrite.

Issues (which are being worked on)

1 - HTML Form Learning does not recognise the values for SELECT fields with HTML Forms.

2 - Remove Duplicates in word list tools is disabled.

3 - Update cookies is inactive in HTTP POST; the cookies are currently static.

4 - SMB mode will not handle target addresses that are not in UNC format.

5 - In HTTP (FORM/CGI), HTTP status codes such as 'moved' redirects are read but not interpreted.

There are lots more issues; I'll update this list when I know what they are.

DON'T use lots of simultaneous connections unless it's beneficial to do so - usually slow-responding targets (like many POP3 servers, which have failure notification times of 10 seconds or more) are the best candidates.

There are many variables to take into account: connection speed, authentication notification speed, server capacity, even your machine's capacity in some scenarios. Very often you will find that fewer connections give you more speed - this is important.

DON'T use the keepalive/stayconnected options if you are having problems - it is usually better to troubleshoot these things in one authentication per connection mode.

DO use keepalive/stay connected options if you can - they can greatly increase speed.

DO use positive authentication responses in your custom sequences - they are usually more reliable.

DO take note of the error indicators in the bottom right of the Brutus main window - if they are flashing too often then consider changing some settings.

DO use a network sniffer if you can - to understand and troubleshoot authentication sequences to various services. Also consider using netcat or telnet to 'manually' authenticate against a service to see exactly what the server is responding with and what you need to tell it.

DO create custom word lists for your specific targets - If the target user(s) is/are known then create user specific wordlists using the built in password generator. Using target specific lists in conjunction with perhaps a list of common passwords probably offers you the best chance of positive authentication in a reasonable amount of time.
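The same idea in miniature: given a known base word, common mutations (capitalisation, appended years, punctuation) can be expanded into a target-specific list. A small sketch (the base word, years and suffixes are made-up examples):

```shell
# expand a known base word into common password mutations
for base in alice Alice; do
  for year in "" 2023 2024; do
    for suffix in "" "!" "123"; do
      echo "${base}${year}${suffix}"
    done
  done
done > targeted.txt

wc -l < targeted.txt   # 2 * 3 * 3 = 18 candidates
```

A short, targeted list like this is usually worth trying before falling back to large generic word lists or brute force.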

DON'T do anything with this tool that you might regret later

François PIETTE - TWSocket (a Winsock wrapper that is part of the ICS for Delphi)

Borland - because Delphi is actually not bad

For updates see the Brutus website.


