peschla.net Report


  • Alexa Global Rank: #7,906,909

    Server: Apache/2.2.22 (Debian)
    X-Powered-By: PHP/5.4.45-0+deb7u13

    Main IP address: 5.45.97.162. Server location: Karlsruhe, Germany. ISP: netcup GmbH. TLD: net. Country code: DE.

    Description: foobar et al. -- -- home studienplaner about subscribe testing a rails app on semaphore with parallel_tests we use semaphore for automated testing. just push changes and the tests are automatically ru...

    This report was last updated on 12-Jun-2018.

Created Date:2008-04-16
Changed Date:2017-06-29

Technical data for peschla.net


GeoIP provides information such as latitude, longitude and ISP (Internet Service Provider). Our GeoIP service located the host peschla.net: it is currently hosted in Germany and its service provider is netcup GmbH. A minimal lookup sketch follows the values below.

Latitude: 53.653228759766
Longitude: 12.802209854126
Country: Germany (DE)
City: Karlsruhe
Region: Mecklenburg-Vorpommern
ISP: netcup GmbH
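
For reference, a value set like the one above can be reproduced locally. The sketch below is a minimal example using the MaxMind geoip2 Python package and a downloaded GeoLite2 City database; both the package and the database path are assumptions made for illustration, not tools this report states it used.

    # Minimal GeoIP lookup sketch (assumes the third-party `geoip2` package
    # and a locally downloaded GeoLite2-City.mmdb database file).
    import geoip2.database

    ip = "5.45.97.162"  # main IP address reported for peschla.net

    with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
        record = reader.city(ip)
        print("Country:", record.country.name, record.country.iso_code)
        print("City:", record.city.name)
        print("Latitude:", record.location.latitude)
        print("Longitude:", record.location.longitude)
        # ISP/ASN data comes from a separate database (e.g. an ASN database),
        # not from the City database queried here.

Free City databases are often imprecise, which may explain why the coordinates above point to Mecklenburg-Vorpommern while the city is given as Karlsruhe.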

HTTP Header Analysis


HTTP headers are part of the HTTP protocol: the browser sends request headers describing what it wants and will accept back, and the web server (here Apache/2.2.22 (Debian)) replies with response headers such as those listed below. A short sketch for retrieving them follows the list.

Content-Length:27595
X-Powered-By:PHP/5.4.45-0+deb7u13
Set-Cookie:mc_session_ids[default]=b08ae0466620129994ad7761fa4de208983e74d8; expires=Tue, 12-Jun-2018 08:03:58 GMT; path=/; httponly, mc_session_ids[multi][0]=b501cbcabb9aab2795a786d71c63e44b7b698020; expires=Tue, 12-Jun-2018 08:03:58 GMT; path=/, mc_session_ids[multi][1]=0cf847a08e0dfe8cccef0dde28d111f1a1abc44b; expires=Tue, 12-Jun-2018 08:03:58 GMT; path=/, mc_session_ids[multi][2]=dc17f36d782cdb9fb23c929f42c6785e8669a018; expires=Tue, 12-Jun-2018 08:03:58 GMT; path=/, mc_session_ids[multi][3]=07c5c6db4159d565b1d1209d882b88dc7404e52a; expires=Tue, 12-Jun-2018 08:03:58 GMT; path=/, mc_session_ids[multi][4]=2ff1bd6e5bca1f1e6e44dcc96a27c4c664f316cc; expires=Tue, 12-Jun-2018 08:03:58 GMT; path=/
Content-Encoding:gzip
Vary:Accept-Encoding
Keep-Alive:timeout=15, max=100
Server:Apache/2.2.22 (Debian)
Connection:Keep-Alive
Link:; rel="https://api.w.org/"
Date:Tue, 12 Jun 2018 07:58:58 GMT
Content-Type:text/html; charset=UTF-8
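
The response headers listed above can be re-checked with a plain HTTP request. The following sketch uses only the Python standard library; it assumes the site is still reachable at this address, and the returned values (cookies, dates, server version) will of course differ from the snapshot above.

    # Fetch the current response headers with the Python standard library.
    # The URL comes from this report; header values will differ from the
    # snapshot shown above.
    import urllib.request

    request = urllib.request.Request("http://peschla.net/", method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        for name, value in response.getheaders():
            print(f"{name}: {value}")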

DNS

SOA:  nsa1.schlundtech.de. j_peschl.cs.uni-kl.de. 2017062901 43200 7200 1209600 600
NS:   nsa1.schlundtech.de.
      nsc1.schlundtech.de.
      nsd1.schlundtech.de.
      nsb1.schlundtech.de.
IPv4: 5.45.97.162
      ASN: 197540
      Owner: NETCUP-AS netcup GmbH, DE
      Country: DE
MX:   preference = 10, mail exchanger = peschla.net.
      preference = 10, mail exchanger = 5.45.97.162.
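
The records above can be re-queried at any time. As a small illustration, the sketch below uses the third-party dnspython package; this is an assumption for the example, not something the report depends on, and its resolve() call requires dnspython 2.x (older versions call it query()).

    # Re-query SOA, NS and MX records for the domain.
    # Assumes the third-party `dnspython` package (import name `dns`), version 2.x.
    import dns.resolver

    domain = "peschla.net"

    for rdtype in ("SOA", "NS", "MX"):
        try:
            answer = dns.resolver.resolve(domain, rdtype)
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue
        for rdata in answer:
            print(rdtype, rdata.to_text())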

HtmlToText

foobar et al. -- -- home studienplaner about subscribe testing a rails app on semaphore with parallel_tests we use semaphore for automated testing. just push changes and the tests are automatically run. great. but there is a caveat as the test suite grows. semaphore kills processes that run longer than an hour. at this point we integrated parallel_tests to reduce runtime. the result is great, overall runtime decreased to about one third. setting up parallel tests preparing database.yml can’t be done statically as the database name is randomly generated for each run. i wrote a rake task which adds the test_env_number variable to the database name: lib/tasks/testing.rake namespace :testing do task :prepare_db_config do |_t, _args| if rails.env.test? db_config = yaml.load_file('config/database.yml') db_config['test']['database'] += " <% = env [ 'test_env_number' ] %> " unless db_config['test']['database'].ends_with?(" <% = env [ 'test_env_number' ] %> ") file.open('config/database.yml', 'w') { |f| f.write db_config.to_yaml } end end the test environment is prepared by adding the following two lines at the end of the setup thread: setup thread unset spec_opts # optionally if you want to use insights with parallel_tests bundle exec rake testing:prepare_db_config bundle exec rake parallel:drop parallel:create parallel:migrate because we have two threads (i. e. separate machines that can do stuff simultaneously), we pass each thread a different regexp to tell it which tests to run: thread 1 spec_opts = "--format progress --format json --out rspec_report.json" bundle exec rspec spec / features / bundle exec rake parallel:spec [ spec / models / ] as you can see, feature tests are not run with parallel. we often observed test failures due to lost or closed server connections and couldn’t fix it in considerable time. if you have a solution, i’d appreciate your comment. the second thread gets the complementary regexp (and also checks whether everyone behaved by running rubocop): thread 2 bundle exec rake rubocop bundle exec rake parallel:spec [ ^\ ( ?\ ! spec / features\ | spec / models\ ) ] if you are not sure whether you missed tests with your regexps, run your tests once with rspec and then once with parallel_tests for each regexp. if the numbers of the latter add up to the number of tests run by rspec you know you are right. use insights with parallel tests semaphore recently launched a nice feature, insights , which collects runtime data for tests. for rspec it automatically injects the required parameters to generate the report. unfortunately this does not work for the parallel command which runs most of our tests. but after some trial and error i found out how to set it up. 1. generate json reports with parallel parallel can forward parameters to each rspec process. but there is a problem, logging is not properly synced. if all processes write to the same file, the result is not json but a mess. so the idea is to use the test_env_number variable in the report file name for each process. i couldn’t figure out, how to specify that within the .rspec_parallel file (and doubt that it can be done because the parameters are forwarded within single quotes which prevents variable interpolation) but it worked by passing the required parameters directly to the rake task. 
extend the parallel calls in your test threads this way: specify rspec logger and logging format bundle exec rake parallel:spec [ spec / models / , "--format progress --format json --out parallel_report_ \$ test_env_number.json" ] the first format ensures you get the dots in stdout , the second tells rspec to log the execution details in json format to some file (escaping the $ is crucial here). 2. combining the reports insights expects all runtime data to be in a file called rspec_report.json in the root directory of your project. so we need another rake task to combine all logs: lib/tasks/testing.rake namespace :testing do task :combine_runtime_log , [ :rspec_log_path , :parallel_logs_pattern ] => [ :environment ] do | _t, args | if rails. env . test ? rspec_log_path = pathname ( args [ :rspec_log_path ] ) log_hashes = dir [ args [ :parallel_logs_pattern ] ] . map { | path | json. parse ( pathname ( path ) . read ) } log_hashes << json. parse ( rspec_log_path. read ) if rspec_log_path. exist ? result_hash = log_hashes. reduce { | a, e | a. tap { | m | m [ 'examples' ] + = e [ 'examples' ] } } rspec_log_path. write result_hash. to_json end end end the interesting data is found in the 'examples' key of the json, the other keys are the same in all files. the task above is called in both test thread as last command: source code bundle exec rake testing:combine_runtime_log [ rspec_report.json,parallel_report_ * .json ] the task is written and called such that it also works, if there is no rspec_report.json which is important in cases all tests are run in parallel. it could also be called by the post thread and therefore would only have to be specified once. but then it gets executed even if one test thread fails which we decided against. happy insighting! update (11/30/2015) shortly after i wrote this, semaphore started to set spec_opts environment variable on the machines which overrides the format and logging parameters passed to parallel. so in order to get things running again, you need to unset this variable in your set-up thread and pass the specified options explicitly to the tests run with rspec directly (i adjusted the code snippets above). 0 november 25th, 2015 in ruby and rails | tags: ruby on rails , testing | 1 comment rails: two buttons on a form recently i had a create form where one should be able to chose an associated model from a large set of possible values. it wouldn’t be too pleasant to have a drop down listing all these options. my idea was to render paginated radio buttons and a search field to limit the items available. submitting a query shouldn’t wipe out your previous inputs. so, both, create and search button, submit to the same action which then must decide by the button pressed what to do. as searching is a common concern i put this into a – surprise – concern. form imagine a politics discussion board. you think it would be a nice feature if users could award prizes to politicians to honor their efforts. maybe someone wants to award a politician for his achievements as a servant of a neoliberal lobby, e.g. the financial sector. so a user creates the „disguised private investor bailout magician“ prize and needs to select the awardee. there are a lot of candidates, the german bundestag alone currently already has 631 members. so here is how the form could look like: form with entered award name and description the labels and inputs for tile and description are regular form elements. 
here is the code for the search field and the select options, pagination is done by kaminari : source code <% = render layout : 'shared/paginated' , locals: { collection: @politicians , entry_name: 'politicians' , params: { action: : new } } do %> <div class="input-group"> <% = text_field_tag :query , params [ :query ] , class : 'form-control' , placeholder: 'search politician...' %> <div class="input-group-btn"> <% = button_tag type: 'submit' , class : 'btn btn-default' , name: :commit , value: :search_politicians do %> search <% end %> </div> </div> <div class="btn-group-vertical form-control-static" role="group" data-toggle="buttons"> <% @politicians . each do | politician | %> <% active = politician. id . to_s == @award . awardee_id %> <% classes = 'btn btn-default text-left-important' + ( active ? ' active' : '' ) %> <% = label_tag '' , class : classes do %> <% = f. radio_button 'awardee' , politician. id %><% = "#{politician.last_name}, #{politician.first_name}" %> <% end %> <% end %> </div> <% end %> the important parts are the text input named query and the button with value search_politicians to submit the query. controller here is what the controller actions for new and create look like – the magic happens in line 4 where the searchandredirect concern is called: source code class awardscontroller < applicationcontroller include searchandredirect search_and_redirect commit: :search_politicians , redirects: { create: :new } , forwarded_params: [ :award , :query ] def new if params [ :award ] @award = award. new ( params. require ( :award ) . permit ( :title , :description , :awardee_id ) ) else @award = award. new end @politicians = politician. all . page ( params [ :page ] ) search = "%#{params[:query]}%" @politicians = @politicians . where ( "first_name like ? or last_name like ?" , search, search ) if params [ :query ] . present ? end def create @award = award. new ( params. require ( :award ) . permit ( :title , :description , :awardee_id ) ) if @award . save redirect_to :award , notice: 'award was created.' else @politicians = politician. all . page ( params [ :page ] ) render 'new' end end end search_and_redirect is called with three parameters: :commit, :redirects and :forwarded_params. the :commit parameter tells the concern for which submit actions it should get active. so if a form is submitted with ’search_politicians‘ as commit value it will forward the specified parameters :award (to maintain previous inputs) and :query to the requested action. in this case this rule applies if a form was submitted to the create action and will be redirected to new. as :redirects takes a hash, you can specify multiple redirects for the same forwarding rule. concern finally, here is the concern’s code: source code module searchandredirect extend activesupport::concern module classmethods def search_and_redirect ( options ) options. deep_stringify_keys ! commits = ( options [ 'commits' ] || [ options [ 'commit' ] ] ) . map ( & :to_s ) before_filter only: options [ 'redirects' ] . keys do action = params [ :action ] if commits. include ? ( params [ :commit ] ) && options [ 'redirects' ] . keys . include ? ( action ) forwarded_params = options [ 'forwarded_params' ] . reduce ( { } ) { | memo, param | memo. tap { | m | m [ param ] = params [ param ] } } redirect_to ( { action: options [ 'redirects' ] [ action ] } . 
merge ( forwarded_params ) ) end end end end end now, we can search for wolfram without having to rewrite the whole description again: form with preserved inputs after searching 0 august 6th, 2015 in programming , ruby and rails | tags: ruby on rails | no comments translating attribute values in rails: human_attribute_values some model attributes in the project i’m working on have values which have to be mapped in the views. there are boolean values and enums which are not really meaningful as is. rails already provides built in support for mapping attribute and model names via model.model_name.human and model.human_attribute_name . because they use the i18n api you can specify different translations for each locale. now, to map values, we could write helpers doing that. but to have the same flexibility, we would need to somehow use the i18n api, too. instead of doing this in our custom application code, i thought it would be nice to have a similar mechanism for values on models and instances. maybe something like human_attribute_value . so i wrote my first tiny gem: human_attribute_values . lookup meachanism following the conventions of the built in translation mechanisms, the translations are first looked up under activerecord.values.model_name.attribute_name.value , then values.attribute_name.value and if there is no translation found, the stringified value itself is returned. i see the primary use case in translating booleans and enums, but it also supports numbers and, of course, strings. a bonus is the way sti models are handled: rails steps up through the translations for ancestors until it finds a mapping. the implementation is a small adjustment to the code of human_attribute_name . because of this i’ll probably have to find a way to provide different implementations for different rails versions someday. 8 februar 1st, 2015 in programming , ruby and rails | tags: gems , i18n , ruby on rails | 1 comment fancy promptline with git status details i’ve been using powerline-shell for quite a while and like it a lot. i get aware of that every time i use a terminal which does not tell me which branch i’m on. some days ago i stumbled upon promptline.vim and as i’m also using vim-airline i gave it a try. promptline.vim exports a shell script from the current airline settings. after sourcing it into a shell you should immediately see the updated promptline (if there are random characters instead of fancy symbols, you need to install powerline symbols first): export promptline settings in vim :promptlinesnapshot ~/.shell_prompt.sh airline automatically load the promptline when a shell is opened: load promptline in ~/.bashrc # load promptline if available [ -f ~ / .shell_prompt.sh ] && source ~ / .shell_prompt.sh vim source i use the light solarized theme created by ethan schoonover and invoked the export from vim looking like this: vim with airline status bar result i tweaked the result a bit. originally it only indicates whether, and if any, which changes have been made. 
i added coloring for the git slice, red for pending changes (both staged and unstaged): +3 indicates three files with unstaged chanegs •2 tells about two files with staged changes … indicates that there are untracked files bash with customized promptline in a clean working directory or if there are only untracked files, the slice is green: promptline for a directory with untracked files/no changes customization my .shell_prompt.sh ( download ) looks like this now: .shell_prompt.sh # # this shell prompt config file was created by promptline.vim # function __promptline_last_exit_code { [ [ $last_exit_code -gt 0 ] ] || return 1 ; printf "%s" " $last_exit_code " } function __promptline_ps1 { local slice_prefix slice_empty_prefix slice_joiner slice_suffix is_prompt_empty = 1 # section "aa" header slice_prefix = " ${aa_bg} ${sep} ${aa_fg} ${aa_bg} ${space} " slice_suffix = " $space ${aa_sep_fg} " slice_joiner = " ${aa_fg} ${aa_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${aa_fg} ${aa_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "a" slices __promptline_wrapper " $(if [[ -n ${zsh_version-} ]]; then print %m; elif [[ -n ${fish_version-} ]]; then hostname -s; else printf "%s" \\a; fi ) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # section "a" header slice_prefix = " ${a_bg} ${sep} ${a_fg} ${a_bg} ${space} " slice_suffix = " $space ${a_sep_fg} " slice_joiner = " ${a_fg} ${a_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${a_fg} ${a_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "a" slices __promptline_wrapper " $(if [[ -n ${zsh_version-} ]]; then print %m; elif [[ -n ${fish_version-} ]]; then hostname -s; else printf "%s" \\h; fi ) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # section "b" header slice_prefix = " ${b_bg} ${sep} ${b_fg} ${b_bg} ${space} " slice_suffix = " $space ${b_sep_fg} " slice_joiner = " ${b_fg} ${b_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${b_fg} ${b_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "b" slices __promptline_wrapper " $user " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # section "c" header slice_prefix = " ${c_bg} ${sep} ${c_fg} ${c_bg} ${space} " slice_suffix = " $space ${c_sep_fg} " slice_joiner = " ${c_fg} ${c_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${c_fg} ${c_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "c" slices __promptline_wrapper " $(__promptline_cwd) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # this will prepare the variables for __promptline_git_status and adjust the bg coloring for section "y" __determine_git_colors # section "y" header slice_prefix = " ${y_bg} ${sep} ${y_fg} ${y_bg} ${space} " slice_suffix = " $space ${y_sep_fg} " slice_joiner = " ${y_fg} ${y_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${y_fg} ${y_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "y" slices __promptline_wrapper " $(__promptline_vcs_branch) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } __promptline_wrapper " $(__promptline_git_status) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 
; } # section "warn" header slice_prefix = " ${warn_bg} ${sep} ${warn_fg} ${warn_bg} ${space} " slice_suffix = " $space ${warn_sep_fg} " slice_joiner = " ${warn_fg} ${warn_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${warn_fg} ${warn_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "warn" slices __promptline_wrapper " $(__promptline_last_exit_code) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # close sections printf "%s" " ${reset_bg} ${sep} $reset $space " } function __promptline_vcs_branch { local branch local branch_symbol = " " # git if hash git 2 >/ dev / null; then if branch =$ ( { git symbolic-ref --quiet head || git rev-parse --short head; } 2 >/ dev / null ) ; then branch = ${branch##*/} printf "%s" " ${branch_symbol} ${branch:-unknown} " return fi fi return 1 } function __determine_git_colors { [ [ $ ( git rev-parse --is-inside-work-tree 2 >/ dev / null ) == true ] ] || return 1 __promptline_git_unmerged_count =0 __promptline_git_modified_count =0 __promptline_git_has_untracked_files =0 __promptline_git_added_count =0 __promptline_git_is_clean = "" set -- $ ( git rev-list --left-right --count @ { upstream } ...head 2 >/ dev / null ) __promptline_git_behind_count =$1 __promptline_git_ahead_count =$ 2 # added (a), copied (c), deleted (d), modified (m), renamed (r), changed (t), unmerged (u), unknown (x), broken (b) while read line; do case " $line " in m * ) __promptline_git_modified_count =$ ( ( $__promptline_git_modified_count + 1 ) ) ;; u * ) __promptline_git_unmerged_count =$ ( ( $__promptline_git_unmerged_count + 1 ) ) ;; esac done < < ( git diff --name-status ) while read line; do case " $line " in * ) __promptline_git_added_count =$ ( ( $__promptline_git_added_count + 1 ) ) ;; esac done < < ( git diff --name-status --cached ) if [ -n " $(git ls-files --others --exclude-standard) " ] ; then __promptline_git_has_untracked_files =1 fi if [ $ ( ( __promptline_git_unmerged_count + __promptline_git_modified_count + __promptline_git_has_untracked_files + __promptline_git_added_count ) ) -eq 0 ] ; then __promptline_git_is_clean =1 fi y_fg = " ${wrap} 38;5;246 ${end_wrap} " # set green background for the branch info if there are no changes or only untracked files if [ [ $ ( ( __promptline_git_is_clean + __promptline_git_has_untracked_files ) ) -gt 0 ] ] ; then y_bg = " ${wrap} 48;5;194 ${end_wrap} " y_sep_fg = " ${wrap} 38;5;194 ${end_wrap} " fi # set red background for the branch info if there are unstaged or staged (but not yet committed) changes if [ [ $ ( ( __promptline_git_modified_count + __promptline_git_added_count ) ) -gt 0 ] ] ; then y_bg = " ${wrap} 48;5;224 ${end_wrap} " y_sep_fg = " ${wrap} 38;5;224 ${end_wrap} " #y_bg="${wrap}48;5;217${end_wrap}" #y_sep_fg="${wrap}38;5;217${end_wrap}" fi } function __promptline_git_status { [ [ $ ( git rev-parse --is-inside-work-tree 2 >/ dev / null ) == true ] ] || return 1 local added_symbol = "●" local unmerged_symbol = "✖" local modified_symbol = "✚" local clean_symbol = "✔" local has_untracked_files_symbol = "…" local ahead_symbol = "↑" local behind_symbol = "↓" local leading_whitespace = "" [ [ $__promptline_git_ahead_count -gt 0 ] ] && { printf "%s" " $leading_whitespace $ahead_symbol $__promptline_git_ahead_count " ; leading_whitespace = " " ; } [ [ $__promptline_git_behind_count -gt 0 ] ] && { printf "%s" " $leading_whitespace $behind_symbol $__promptline_git_behind_count " ; leading_whitespace = " " ; } [ [ 
$__promptline_git_modified_count -gt 0 ] ] && { printf "%s" " $leading_whitespace $modified_symbol $__promptline_git_modified_count " ; leading_whitespace = " " ; } [ [ $__promptline_git_unmerged_count -gt 0 ] ] && { printf "%s" " $leading_whitespace $unmerged_symbol $__promptline_git_unmerged_count " ; leading_whitespace = " " ; } [ [ $__promptline_git_added_count -gt 0 ] ] && { printf "%s" " $leading_whitespace $added_symbol $__promptline_git_added_count " ; leading_whitespace = " " ; } [ [ $__promptline_git_has_untracked_files -gt 0 ] ] && { printf "%s" " $leading_whitespace $has_untracked_files_symbol " ; leading_whitespace = " " ; } [ [ $__promptline_git_is_clean -gt 0 ] ] && { printf "%s" " $leading_whitespace $clean_symbol " ; leading_whitespace = " " ; } } function __promptline_cwd { local dir_limit = "3" local truncation = "⋯" local first_char local part_count =0 local formatted_cwd = "" local dir_sep = "  " local tilde = "~" local cwd = " ${pwd/#$home/$tilde} " # get first char of the path, i.e. tilde or slash [ [ -n ${zsh_version-} ] ] && first_char = $cwd [ 1,1 ] || first_char = ${cwd::1} # remove leading tilde cwd = " ${cwd#\~} " while [ [ " $cwd " == */* && " $cwd " ! = "/" ] ] ; do # pop off last part of cwd local part = " ${cwd##*/} " cwd = " ${cwd%/*} " formatted_cwd = " $dir_sep $part $formatted_cwd " part_count =$ ( ( part_count+1 ) ) [ [ $part_count -eq $dir_limit ] ] && first_char = " $truncation " && break done printf "%s" " $first_char $formatted_cwd " } function __promptline_left_prompt { local slice_prefix slice_empty_prefix slice_joiner slice_suffix is_prompt_empty = 1 # section "a" header slice_prefix = " ${a_bg} ${sep} ${a_fg} ${a_bg} ${space} " slice_suffix = " $space ${a_sep_fg} " slice_joiner = " ${a_fg} ${a_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${a_fg} ${a_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "a" slices __promptline_wrapper " $(if [[ -n ${zsh_version-} ]]; then print %m; elif [[ -n ${fish_version-} ]]; then hostname -s; else printf "%s" \\h; fi ) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # section "b" header slice_prefix = " ${b_bg} ${sep} ${b_fg} ${b_bg} ${space} " slice_suffix = " $space ${b_sep_fg} " slice_joiner = " ${b_fg} ${b_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${b_fg} ${b_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "b" slices __promptline_wrapper " $user " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # section "c" header slice_prefix = " ${c_bg} ${sep} ${c_fg} ${c_bg} ${space} " slice_suffix = " $space ${c_sep_fg} " slice_joiner = " ${c_fg} ${c_bg} ${alt_sep} ${space} " slice_empty_prefix = " ${c_fg} ${c_bg} ${space} " [ $is_prompt_empty -eq 1 ] && slice_prefix = " $slice_empty_prefix " # section "c" slices __promptline_wrapper " $(__promptline_cwd) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; is_prompt_empty = 0 ; } # close sections printf "%s" " ${reset_bg} ${sep} $reset $space " } function __promptline_wrapper { # wrap the text in $1 with $2 and $3, only if $1 is not empty # $2 and $3 typically contain non-content-text, like color escape codes and separators [ [ -n "$1" ] ] || return 1 printf "%s" " ${2} ${1} ${3} " } function __promptline_right_prompt { local slice_prefix slice_empty_prefix slice_joiner slice_suffix # section "warn" header slice_prefix = " ${warn_sep_fg} 
${rsep} ${warn_fg} ${warn_bg} ${space} " slice_suffix = " $space ${warn_sep_fg} " slice_joiner = " ${warn_fg} ${warn_bg} ${alt_rsep} ${space} " slice_empty_prefix = "" # section "warn" slices __promptline_wrapper " $(__promptline_last_exit_code) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; } # section "y" header slice_prefix = " ${y_sep_fg} ${rsep} ${y_fg} ${y_bg} ${space} " slice_suffix = " $space ${y_sep_fg} " slice_joiner = " ${y_fg} ${y_bg} ${alt_rsep} ${space} " slice_empty_prefix = "" # section "y" slices __promptline_wrapper " $(__promptline_vcs_branch) " " $slice_prefix " " $slice_suffix " && { slice_prefix = " $slice_joiner " ; } # close sections printf "%s" " $reset " } function __promptline { local last_exit_code = " ${promptline_last_exit_code:-$?} " local esc =$ '[' end_esc =m if [ [ -n ${zsh_version-} ] ] ; then local noprint = '%{' end_noprint = '%}' elif [ [ -n ${fish_version-} ] ] ; then local noprint = '' end_noprint = '' else local noprint = '\[' end_noprint = '\]' fi local wrap = " $noprint $esc " end_wrap = " $end_esc $end_noprint " local space = " " local sep = "" local rsep = "" local alt_sep = "" local alt_rsep = "" local reset = " ${wrap} 0 ${end_wrap} " local reset_bg = " ${wrap} 49 ${end_wrap} " local aa_fg = " ${wrap} 38;5;7 ${end_wrap} " local aa_bg = " ${wrap} 48;5;246 ${end_wrap} " local aa_sep_fg = " ${wrap} 38;5;246 ${end_wrap} " local a_fg = " ${wrap} 38;5;7 ${end_wrap} " local a_bg = " ${wrap} 48;5;11 ${end_wrap} " local a_sep_fg = " ${wrap} 38;5;11 ${end_wrap} " local b_fg = " ${wrap} 38;5;11 ${end_wrap} " # set red background for root if [ $uid == 0 ] ; then local b_bg = " ${wrap} 48;5;210 ${end_wrap} " local b_sep_fg = " ${wrap} 38;5;210 ${end_wrap} " # set light green background for anyone else else local b_bg = " ${wrap} 48;5;187 ${end_wrap} " local b_sep_fg = " ${wrap} 38;5;187 ${end_wrap} " fi local c_fg = " ${wrap} 38;5;14 ${end_wrap} " local c_bg = " ${wrap} 48;5;7 ${end_wrap} " local c_sep_fg = " ${wrap} 38;5;7 ${end_wrap} " local warn_fg = " ${wrap} 38;5;15 ${end_wrap} " local warn_bg = " ${wrap} 48;5;9 ${end_wrap} " local warn_sep_fg = " ${wrap} 38;5;9 ${end_wrap} " local y_fg = " ${wrap} 38;5;14 ${end_wrap} " local y_bg = " ${wrap} 48;5;14 ${end_wrap} " local y_sep_fg = " ${wrap} 38;5;14 ${end_wrap} " if [ [ -n ${zsh_version-} ] ] ; then prompt = " $(__promptline_left_prompt) " rprompt = " $(__promptline_right_prompt) " elif [ [ -n ${fish_version-} ] ] ; then if [ [ -n "$1" ] ] ; then [ [ "$1" = "left" ] ] && __promptline_left_prompt || __promptline_right_prompt else __promptline_ps1 fi else ps1 = " $(__promptline_ps1) " fi } if [ [ -n ${zsh_version-} ] ] ; then if [ [ ! ${precmd_functions[(r)__promptline]} == __promptline ] ] ; then precmd_functions+= ( __promptline ) fi elif [ [ -n ${fish_version-} ] ] ; then __promptline "$1" else if [ [ ! " $prompt_command " == * __promptline * ] ] ; then prompt_command = '__promptline;' $ '\n' " $prompt_command " fi fi changes: line 14-16: add slice with current time (only works in bash as is) line 71-137: add functions to determine git status line 39: determine git details and coloring for ‚y‘-section before rendering starts line 45: call git status rendering line 234-259: changed some colors, added ‚aa‘-colors, user name background depends on being root or not i took the logic for the git status slice from promptline.vim and adjusted it so that the coloring for the slice is determined before its rendering starts. 
colors can be adjusted by changing the third value of the tuples, a nice cheat-sheet can be found here . 9 november 13th, 2014 in programming | tags: bash , git , shell | 4 comments apache2 as a transparent http-gateway imagine you want to provide some content but at the same time hide its true origin. sounds ridiculous? well, there may be good reasons to do so: hide the original server from the net for security reasons circumvent or help to circumvent location based censorship make content of some unsuitable cms available to authorized users only (estimated effort to switch cms is about ‚one hundred before thousand‘ – the largest number known to mankind) from the technical perspective the actual reason is not so important – let’s get on with the details: setting up the gateway if you want to use the path /proxy/ as entry point for your gateway to www.example.com then you need to enable mod_proxy and mod_proxy_http and define passproxy and passproxyreverse directives for that: basic gateway configuration < location /proxy/> proxypass http://www.example.com/ proxypassreverse http://www.example.com/ </ location > an ssl remote additionally needs mod_proxy_connect and the following directives in the virtual host: ssl settings # required sslproxyengine on # recommended sslproxycheckpeercn on sslproxycheckpeerexpire on rewrite links in the returned page links and referenced style sheets in the retrieved page will point to the proxied url. using mod_proxy_html apache2 can rewrite them to also go through the proxy. for me the perl implementation worked better (needs mod_perl ): rewrite links perlinputfilterhandler apache2::modproxyperlhtml perloutputfilterhandler apache2::modproxyperlhtml < location /proxy/> # inflate and deflate enables processing of compressed pages setoutputfilter inflate;proxy-html;deflate # multiple mappings and use of regular expressions possible perladdvar proxyhtmlurlmap "http://www.example.com/ /proxy/" </ location > the complete gateway configuration: gateway configuration sslproxyengine on sslproxycheckpeercn on sslproxycheckpeerexpire on perlinputfilterhandler apache2::modproxyperlhtml perloutputfilterhandler apache2::modproxyperlhtml < location /proxy/> proxypass http://www.example.com/ proxypassreverse http://www.example.com/ setoutputfilter inflate;proxy-html;deflate perladdvar proxyhtmlurlmap "http://www.example.com/ /proxy/" </ location > further notes if the proxied site relies on cookies, then you need to configure proxypassreversecookiedomain and proxypassreversecookiepath . proxypassreverse will only rewrite location , content-location and uri headers. if you need any other headers rewritten, you must put additional measures into place. docs for the used proxy modules: mod_proxy , mod_proxy_http , mod_proxy_connect 5 september 16th, 2014 in administration | tags: apache2 , proxy , webserver | no comments kabelbw, unitiymedia – alles dasselbe!? die letzte mail, die mir mitteilte, meine neue kabelbw-rechnung sei nun online abrufbar, verwies mich auf https://app.unitymedia.de/kundencenter/sitzung/anmelden . mein login-versuch wurde mit „logindaten falsch“ quittiert. was solls, passwörter vergisst man schon mal, also rücksetzfunktion benutzen und auf ein neues. jetzt mit ganz sicher den richtigen zugangsdaten – sollte man meinen. wieder dieselbe fehlermeldung. ende vom lied: https://app.kabelbw.de/kundencenter/sitzung/anmelden sieht zwar gleich aus, ist aber nicht dasselbe. dort funktionieren dann auch die über die unitymedia-domain geänderten zugangsdaten. 
klingt komisch, ist aber so. 7 juli 9th, 2014 in around the internet | no comments git: pull immature work without dirty commit yesterday i wanted to continue ongoing work i started on a different machine. but the code was not clean when i left, so i didn’t commit. to prevent a dirty commit i temporarily committed my work on the remote machine, then pulled that commit directly from there and afterwards discarded the temporary commit on the other machine. no bad commit left: on the remote machine $ git add . $ git commit -m "temporary commit" on the local machine $ git checkout -b my_branch $ git pull ssh://[email protected]:port/path/to/repository my_branch $ git reset 'head^' a final git reset 'head^' on the remote makes the commit to have never happened. 8 april 11th, 2014 in programming | no comments parsing pdf text with coordinates in ruby when i was looking for a gem to parse pdf text, pdf-reader turned out to be a good choice. unfortunately there is only a simple text output for pages. to process content using text positions, a little customization is required to retrieve them. customized subclasses (1) the page class provides a #walk method which takes visitors that get called with the rendering instructions for the page content. to get access to the text runs on a page, a subclass of pagetextreceiver can be used, which only adds readers for the @characters and @mediabox attributes: custompagetextreceiver class custompagetextreceiver < pdf::reader::pagetextreceiver attr_reader :characters , :mediabox end with these two values, pagelayout can be instantiated. it merges found characters into word groups (runs). to retrieve these runs afterwards, we also need a slighty chattier subclass: custompagelayout class custompagelayout < pdf::reader::pagelayout attr_reader :runs end custom subclasses (2) using these two subclasses we could now retrieve the text from pdfs together with its coordinates. but i observed two drawbacks with the original implementations. first, i had files for which the outputted runs contained duplicates which seems to stem from text with shadowing. this can be handled by rejecting duplicates while pagelayout processes them: custompagelayout (2) class custompagelayout < pdf::reader::pagelayout attr_reader :runs def group_chars_into_runs ( chars ) # filter out duplicate chars before going on with regular logic, # seems to happen with shadowed text chars. uniq ! { | val | { x: val. x , y: val. y , text: val. text } } super end end second, in some cases pdf-reader missed spaces in the parsed text, which i think may happen because originally it calculates spaces itself and pagetextreceiver discards spaces found in the pdf stream. i found it to be more reliable to keep spaces and strip extra spaces during further processing: pagetextreceiverkeepspaces class pagetextreceiverkeepspaces < pdf::reader::pagetextreceiver # we must expose the characters and mediabox attributes to instantiate pagelayout attr_reader :characters , :mediabox private def internal_show_text ( string ) if @state . current_font . nil ? raise pdf::reader::malformedpdferror , "current font is invalid" end glyphs = @state . current_font . unpack ( string ) glyphs. each_with_index do | glyph_code, index | # paint the current glyph newx, newy = @state . trm_transform ( 0,0 ) utf8_chars = @state . current_font . to_utf8 ( glyph_code ) # apply to glyph displacment for the current glyph so the next # glyph will appear in the correct position glyph_width = @state . current_font . 
glyph_width ( glyph_code ) / 1000.0 th = 1 scaled_glyph_width = glyph_width * @state . font_size * th # modification to original pdf-reader code which accidentally removes spaces in some cases # unless utf8_chars == space @characters << pdf::reader::textrun . new ( newx, newy, scaled_glyph_width, @state . font_size , utf8_chars ) # end @state . process_glyph_displacement ( glyph_width, 0, utf8_chars == space ) end end end it is the original code except for the two highlighted lines which are commented out to keep also spaces. further processing based on the customized pagetextreceiver and pagelayout i wrote a basic processor which takes the runs of each page and brings them in a structured form for further processing. the processor class can be found in the following script, invoke with ./script.rb /path/to/some.pdf when the pdf-reader gem is installed: sample pdf parser script #! /usr/bin/ruby require 'pdf-reader' class custompagelayout < pdf::reader::pagelayout attr_reader :runs # we need to filter duplicate characters which seem to be caused by shadowing def group_chars_into_runs ( chars ) # filter out duplicate chars before going on with regular logic, # seems to happen with shadowed text chars. uniq ! { | val | { x: val. x , y: val. y , text: val. text } } super end end class pagetextreceiverkeepspaces < pdf::reader::pagetextreceiver # we must expose the characters and mediabox attributes to instantiate pagelayout attr_reader :characters , :mediabox private def internal_show_text ( string ) if @state . current_font . nil ? raise pdf::reader::malformedpdferror , "current font is invalid" end glyphs = @state . current_font . unpack ( string ) glyphs. each_with_index do | glyph_code, index | # paint the current glyph newx, newy = @state . trm_transform ( 0,0 ) utf8_chars = @state . current_font . to_utf8 ( glyph_code ) # apply to glyph displacment for the current glyph so the next # glyph will appear in the correct position glyph_width = @state . current_font . glyph_width ( glyph_code ) / 1000.0 th = 1 scaled_glyph_width = glyph_width * @state . font_size * th # modification to the original pdf-reader code which otherwise accidentally removes spaces in some cases # unless utf8_chars == space @characters << pdf::reader::textrun . new ( newx, newy, scaled_glyph_width, @state . font_size , utf8_chars ) # end @state . process_glyph_displacement ( glyph_width, 0, utf8_chars == space ) end end end class pdftextprocessor max_kerning_distance = 10 # experimental value # pages may specify which pages to actually parse (zero based) # [0, 3] will process only the first and fourth page if present def self . process ( pdf_io, pages = nil ) pdf_io. rewind reader = pdf::reader . new ( pdf_io ) fail 'could not find any pages in the given document' if reader. pages . empty ? processed_pages = [ ] text_receiver = pagetextreceiverkeepspaces. new requested_pages = pages ? reader. pages . values_at ( * pages ) : reader. pages requested_pages. each do | page | unless page. nil ? page. walk ( text_receiver ) runs = custompagelayout. new ( text_receiver. characters , text_receiver. mediabox ) . runs # sort text runs from top left to bottom right # read as: if both runs are on the same line first take the leftmost, else the uppermost - (0,0) is bottom left runs. sort ! { | r1, r2 | r2. y == r1. y ? r1. x <=> r2. x : r2. y <=> r1. y } # group runs by lines and merge those that are close to each other lines_hash = { } runs. each do | run | lines_hash [ run. 
y ] || = [ ] # runs that are very close to each other are considered to belong to the same text "block" if lines_hash [ run. y ] . empty ? || ( lines_hash [ run. y ] . last . last . endx + max_kerning_distance < run. x ) lines_hash [ run. y ] << [ run ] else lines_hash [ run. y ] . last << run end end lines = [ ] lines_hash. each do | y, run_groups | lines << { y: y, text_groups: [ ] } run_groups. each do | run_group | group_text = run_group. map { | run | run. text } . join ( '' ) . strip lines. last [ :text_groups ] << ( { x: run_group. first . x , width: run_group. last . endx - run_group. first . x , text: group_text, } ) unless group_text. empty ? end end # consistent indexing with pages param and reader.pages selection processed_pages << { page: page. number , lines: lines } end end processed_pages end end if file . exists ? ( argv [ 0 ] ) file = file . open ( argv [ 0 ] ) pages = pdftextprocessor. process ( file ) puts pages puts "parsed #{pages.count} pages" else puts "cannot open file '#{argv[0]}' (or no file given)" end the overall output is an array of hashes where each hash covers the text on a page. each page hash has an array of lines in which each line is also represented by an hash. a line has an y-position and an array of text groups found in this line. lines are sorted from top to bottom ([0,0] is on the bottom left) and text groups from left to right: example page hash { page: 1 , lines: [ { y: 771.4006 , text_groups: [ { x: 60.7191 , width: 164.6489200000004 , text: "some text on the left" } , { x: 414.8391 , width: 119.76381600000008 , text: "some text on the right" } ] } , { y: 750.7606 , text_groups: [ { x: 60.7191 , width: 88.51979999999986 , text: "more text" } ] } ] } 7 april 8th, 2014 in programming , ruby and rails | tags: parsing , pdf , ruby | 6 comments problems with sasl authentication for postfix running a server for a long time certainly, if not probably means migration some day. usually, at least for private servers, this is fairly simple. config files and even databases can be moved without an export/import cycle. but when it comes to users and access rights, there may be dragons. my dragon lurked at the interplay between postfix and the sasl authentication daemon which broke the authentication of mail clients. when a client tried to send a mail, the mail log said: sasl login authentication failed: generic failure browsing for a solution i came across tutorials describing the setup of a mail server from scratch. i started to check whether i accidentally broke a step and indeed, the postfix user could not access salsauthd’s socket. i fixed this with the following two commands: chgrp sasl /var/spool/postfix/var/run/saslauthd adduser postfix sasl after restarting postfix and saslauthd, everything worked fine again. more fun with postfix and sasl authentication because i ignored the problem for some time, i had a different problem first: the way how to configure sasl authentication changed (current setup is debian wheezy with postfix 2.9). 
first postfix told me, it could not even find an auth mechanism: sasl plain authentication failed: no mechanism available this could be fixed by changing the config ( /etc/postfix/sasl/smtpd.conf on my machine) from: old sasl config pwcheck_method: saslauthd saslauthd_path: /var/run/saslauthd/mux log_level: 3 mech_list: plain login allow_plaintext: true auxprop_plugin: mysql sql_hostnames: 127.0.0.1 sql_user: db_user sql_passwd: db_password sql_database: db_name sql_select: select password from mailbox where username = '%u' to: new sasl config pwcheck_method: saslauthd saslauthd_path: /var/run/saslauthd/mux log_level: 3 mech_list: plain login allow_plaintext: true auxprop_plugin: sql sql_engine: mysql sql_hostnames: 127.0.0.1 sql_user: db_user sql_passwd: db_password sql_database: db_name sql_select: select password from mailbox where username = '%u@%r' 10 januar 9th, 2014 in administration | tags: postfix , thunderbird | no comments wlan-verbindungsabbrüche mit fritzbox 6340 von meinem neuen internetanbieter habe ich eine fritzbox 6340 als router bekommen. an und für sich ist das gerät ok, man kann nicht alles frei konfigurieren aber komplett kastriert ist das menü auch nicht. allerdings stellte sich dann heraus, dass gefühlt jeder dritte bis sechste versuch eine wlan-verbindung aufzubauen mit der meldung scheiterte, dass die authentifizierung fehlgeschlagen sei. nach einem neustart des wlans über den taster am router klappte es dann. allerdings ist das keine dauerlösung. da das problem bei voller signalstärke und auch direkt neben dem router auftritt und auch so sachen wie kanalwechsel nichts gebracht haben, habe ich mich an den support gewandt. der hat mir vorgeschlagen, es mit einem kürzeren passwort zu versuchen, weil das wohl mal in einem fall geholfen hat – vodoo oder fieser firmwarefehler? – auf jeden fall keine sinnvolle alternative. im log (erweiterte menüansicht, „system“ -> „ereignisse“) fand ich dann schließlich den eintrag: wlan-anmeldung ist gescheitert. maximale anzahl gleichzeitig nutzbarer wlan-geräte erreicht. #002. das kam mir komisch vor, denn verbunden (bzw. authentifiziert) sind im (w)lan höchstens 10 geräte. es scheint so, als ob die fritzbox durch verbindungsanfragen anderer geräte blockiert wird. die geräteliste unter „wlan“ -> „funknetz“ war voll von einträgen fremder geräte, die sich natürlich nicht authentifizieren konnten. ich würde eigentlich erwarten, dass das kein problem darstellt, aber wenn man hier den wlan-zugang auf bekannte geräte beschränkt und alle einträge zu fremdgeräten löscht, tritt das problem nicht mehr auf. 35 juni 30th, 2013 in administration | tags: fritzbox , wlan | 1 comment ←older author jonas categories administration applications around the internet programming hacking chromium mozilla extension development ruby and rails university archive november 2015 august 2015 februar 2015 november 2014 september 2014 juli 2014 april 2014 januar 2014 juni 2013 november 2012 oktober 2012 september 2012 juni 2012 mai 2012 april 2012 februar 2012 januar 2012 dezember 2011 november 2011 oktober 2011 september 2011 august 2011 juli 2011 blogroll computer stone age planet fachschaft meta anmelden valid xhtml xfn -- wordpress -- is powered by wordpress | entries (rss) and comments (rss) copyright © 2008. all right reserved. theme by deniart

URL analysis for peschla.net


http://blog.peschla.net/#codesyntax_15
http://blog.peschla.net/#codesyntax_14
http://blog.peschla.net/#codesyntax_17
http://blog.peschla.net/#codesyntax_16
http://blog.peschla.net/#codesyntax_11
http://blog.peschla.net/2015/11/parallel-tests-for-rails-on-semaphore/#comments
http://blog.peschla.net/#codesyntax_13
http://blog.peschla.net/#codesyntax_12
http://blog.peschla.net/about/
http://blog.peschla.net/#codesyntax_19
http://blog.peschla.net/#codesyntax_18
http://blog.peschla.net/tag/pdf/
http://blog.peschla.net/#codesyntax_10
http://blog.peschla.net/2015/08/
http://blog.peschla.net/2011/10/

Whois Information


Whois is a protocol for accessing domain registration information. It shows when the website was registered, when the registration expires, and what the contact details for the site are. A minimal query sketch follows this paragraph; in a nutshell, the record includes the following information:
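
A whois lookup is a single line of text sent to a registry server on TCP port 43 (RFC 3912); the SERVERS block further below shows that this report queried net.whois-servers.net on port 43. The following sketch is a minimal client along those lines, assuming that server is still reachable.

    # Minimal whois client over TCP port 43 (RFC 3912).
    # Server and port are taken from the SERVERS block of this report.
    import socket

    def whois(domain, server="net.whois-servers.net", port=43):
        with socket.create_connection((server, port), timeout=10) as sock:
            # A bare domain name is accepted; "domain peschla.net" (as in the
            # ARGS line below) restricts the answer to domain records.
            sock.sendall((domain + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(whois("peschla.net"))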

Domain Name: PESCHLA.NET
Registry Domain ID: 1451096836_DOMAIN_NET-VRSN
Registrar WHOIS Server: whois.psi-usa.info
Registrar URL: http://www.psi-usa.info
Updated Date: 2017-06-29T08:19:20Z
Creation Date: 2008-04-16T17:33:51Z
Registry Expiry Date: 2018-04-16T17:33:51Z
Registrar: PSI-USA, Inc. dba Domain Robot
Registrar IANA ID: 151
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +49.94159559482
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Name Server: NSA1.SCHLUNDTECH.DE
Name Server: NSB1.SCHLUNDTECH.DE
Name Server: NSC1.SCHLUNDTECH.DE
Name Server: NSD1.SCHLUNDTECH.DE
DNSSEC: unsigned
URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2018-01-11T15:09:30Z <<<

For more information on Whois status codes, please visit https://icann.org/epp

NOTICE: The expiration date displayed in this record is the date the
registrar's sponsorship of the domain name registration in the registry is
currently set to expire. This date does not necessarily reflect the expiration
date of the domain name registrant's agreement with the sponsoring
registrar. Users may consult the sponsoring registrar's Whois database to
view the registrar's reported date of expiration for this registration.

TERMS OF USE: You are not authorized to access or query our Whois
database through the use of electronic processes that are high-volume and
automated except as reasonably necessary to register domain names or
modify existing registrations; the Data in VeriSign Global Registry
Services' ("VeriSign") Whois database is provided by VeriSign for
information purposes only, and to assist persons in obtaining information
about or related to a domain name registration record. VeriSign does not
guarantee its accuracy. By submitting a Whois query, you agree to abide
by the following terms of use: You agree that you may use this Data only
for lawful purposes and that under no circumstances will you use this Data
to: (1) allow, enable, or otherwise support the transmission of mass
unsolicited, commercial advertising or solicitations via e-mail, telephone,
or facsimile; or (2) enable high volume, automated, electronic processes
that apply to VeriSign (or its computer systems). The compilation,
repackaging, dissemination or other use of this Data is expressly
prohibited without the prior written consent of VeriSign. You agree not to
use electronic processes that are automated and high-volume to access or
query the Whois database except as reasonably necessary to register
domain names or modify existing registrations. VeriSign reserves the right
to restrict your access to the Whois database in its sole discretion to ensure
operational stability. VeriSign may restrict or terminate your access to the
Whois database for failure to abide by these terms of use. VeriSign
reserves the right to modify these terms at any time.

The Registry database contains ONLY .COM, .NET, .EDU domains and
Registrars.

  REGISTRAR PSI-USA, Inc. dba Domain Robot

SERVERS

  SERVER net.whois-servers.net

  ARGS domain =peschla.net

  PORT 43

  TYPE domain

DOMAIN

  NAME peschla.net

  CHANGED 2017-06-29

  CREATED 2008-04-16

STATUS
clientTransferProhibited https://icann.org/epp#clientTransferProhibited

NSERVER

  NSA1.SCHLUNDTECH.DE 62.116.159.11

  NSB1.SCHLUNDTECH.DE 83.169.55.11

  NSC1.SCHLUNDTECH.DE 89.146.248.21

  NSD1.SCHLUNDTECH.DE 74.208.254.21

  REGISTERED yes


Mistakes


The following list shows spelling mistakes internet users might plausibly make when typing the searched website's address. A small, purely illustrative sketch for generating such typo variants follows this paragraph.
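
Such lists are normally generated mechanically from the domain string. The sketch below is illustrative only (it is not how this report's list was produced) and derives three common typo classes: a dropped character, a doubled character, and two adjacent characters swapped.

    # Illustrative typo-variant generator: omission, doubling, adjacent swap.
    # Not the generator behind the list below.
    def typo_variants(domain):
        name, _, tld = domain.partition(".")
        variants = set()
        for i in range(len(name)):
            variants.add(name[:i] + name[i + 1:] + "." + tld)                # character dropped
            variants.add(name[:i] + name[i] * 2 + name[i + 1:] + "." + tld)  # character doubled
            if i + 1 < len(name):
                swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
                variants.add(swapped + "." + tld)                            # adjacent swap
        variants.discard(domain)
        return variants

    for variant in sorted(typo_variants("peschla.net")):
        print("www." + variant)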

  • www.upeschla.com
  • www.7peschla.com
  • www.hpeschla.com
  • www.kpeschla.com
  • www.jpeschla.com
  • www.ipeschla.com
  • www.8peschla.com
  • www.ypeschla.com
  • www.peschlaebc.com
  • www.peschlaebc.com
  • www.peschla3bc.com
  • www.peschlawbc.com
  • www.peschlasbc.com
  • www.peschla#bc.com
  • www.peschladbc.com
  • www.peschlafbc.com
  • www.peschla&bc.com
  • www.peschlarbc.com
  • www.peschla4bc.com
  • www.peschlac.com
  • www.peschlabc.com
  • www.peschlavc.com
  • www.peschlavbc.com
  • www.peschlavc.com
  • www.peschla c.com
  • www.peschla bc.com
  • www.peschla c.com
  • www.peschlagc.com
  • www.peschlagbc.com
  • www.peschlagc.com
  • www.peschlajc.com
  • www.peschlajbc.com
  • www.peschlajc.com
  • www.peschlanc.com
  • www.peschlanbc.com
  • www.peschlanc.com
  • www.peschlahc.com
  • www.peschlahbc.com
  • www.peschlahc.com
  • www.peschla.com
  • www.peschlac.com
  • www.peschlax.com
  • www.peschlaxc.com
  • www.peschlax.com
  • www.peschlaf.com
  • www.peschlafc.com
  • www.peschlaf.com
  • www.peschlav.com
  • www.peschlavc.com
  • www.peschlav.com
  • www.peschlad.com
  • www.peschladc.com
  • www.peschlad.com
  • www.peschlacb.com
  • www.peschlacom
  • www.peschla..com
  • www.peschla/com
  • www.peschla/.com
  • www.peschla./com
  • www.peschlancom
  • www.peschlan.com
  • www.peschla.ncom
  • www.peschla;com
  • www.peschla;.com
  • www.peschla.;com
  • www.peschlalcom
  • www.peschlal.com
  • www.peschla.lcom
  • www.peschla com
  • www.peschla .com
  • www.peschla. com
  • www.peschla,com
  • www.peschla,.com
  • www.peschla.,com
  • www.peschlamcom
  • www.peschlam.com
  • www.peschla.mcom
  • www.peschla.ccom
  • www.peschla.om
  • www.peschla.ccom
  • www.peschla.xom
  • www.peschla.xcom
  • www.peschla.cxom
  • www.peschla.fom
  • www.peschla.fcom
  • www.peschla.cfom
  • www.peschla.vom
  • www.peschla.vcom
  • www.peschla.cvom
  • www.peschla.dom
  • www.peschla.dcom
  • www.peschla.cdom
  • www.peschlac.om
  • www.peschla.cm
  • www.peschla.coom
  • www.peschla.cpm
  • www.peschla.cpom
  • www.peschla.copm
  • www.peschla.cim
  • www.peschla.ciom
  • www.peschla.coim
  • www.peschla.ckm
  • www.peschla.ckom
  • www.peschla.cokm
  • www.peschla.clm
  • www.peschla.clom
  • www.peschla.colm
  • www.peschla.c0m
  • www.peschla.c0om
  • www.peschla.co0m
  • www.peschla.c:m
  • www.peschla.c:om
  • www.peschla.co:m
  • www.peschla.c9m
  • www.peschla.c9om
  • www.peschla.co9m
  • www.peschla.ocm
  • www.peschla.co
  • peschla.netm
  • www.peschla.con
  • www.peschla.conm
  • peschla.netn
  • www.peschla.col
  • www.peschla.colm
  • peschla.netl
  • www.peschla.co
  • www.peschla.co m
  • peschla.net
  • www.peschla.cok
  • www.peschla.cokm
  • peschla.netk
  • www.peschla.co,
  • www.peschla.co,m
  • peschla.net,
  • www.peschla.coj
  • www.peschla.cojm
  • peschla.netj
  • www.peschla.cmo