
Logging in with python requests and a csrf-token


I am trying to log in to a web page with Python's requests module. I open a requests.Session(), then read the cookie and the csrf-token contained in a meta tag. I build my payload from the username, the password, a hidden input field, and the csrf-token from the meta tag. After that I call the post method, passing the login URL, the cookie, the payload, and the headers. But afterwards I cannot access the pages behind the login page. What am I doing wrong?

These are the request headers when I perform the login:

Request Headers:

:authority: www.die-staemme.de
:method: POST
:path: /page/auth
:scheme: https
accept: application/json, text/javascript, */*; q=0.01
accept-encoding: gzip, deflate, br
accept-language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
content-length: 50
content-type: application/x-www-form-urlencoded
cookie: cid=261197879; remember_optout=0; ref=start; 
PHPSESSID=3eb4f503f38bfda1c6f48b8f9036574a
origin: https://www.die-staemme.de
referer: https://www.die-staemme.de/
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36
x-csrf-token: 3c49b84153f91578285e0dc4f22491126c3dfecdabfbf144
x-requested-with: XMLHttpRequest

This is my code so far:

import requests
from bs4 import BeautifulSoup as bs
import lxml  # not used directly; confirms the lxml parser for BeautifulSoup is installed

# Page headers
head = {
    'Content-Type': 'application/x-www-form-urlencoded',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
}
# Start Page
url = 'https://www.die-staemme.de/'
# Login URL
login_url = 'https://www.die-staemme.de/page/auth'
# URL behind the login page
url2 = 'https://de159.die-staemme.de/game.php?screen=overview&intro'

# Open up a session
s = requests.Session()

# Open the login page
r = s.get(url)

# Get the csrf-token from meta tag
soup = bs(r.text,'lxml')
csrf_token = soup.select_one('meta[name="csrf-token"]')['content']

# Get the page cookie
cookie = r.cookies

# Set CSRF-Token
head['X-CSRF-Token'] = csrf_token
head['X-Requested-With'] = 'XMLHttpRequest'

# Build the login payload
payload = {
    'username': '',  # <-- your username
    'password': '',  # <-- your password
    'remember': '1'
}

# Try to login to the page
r = s.post(login_url, cookies=cookie, data=payload, headers=head)

# Try to get a page behind the login page
r = s.get(url2)

# Check whether the login was successful; if so, the page should contain an element with the id menu_row2
soup = bs(r.text, 'lxml')
element = soup.select('#menu_row2')
print(element)
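The meta-tag extraction step can be checked in isolation, without hitting the site. A minimal sketch with a hard-coded HTML snippet (made up for illustration; the real page's markup may differ), using the stdlib html.parser so lxml is not required:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML resembling the login page's <head> section
html = '''
<html><head>
<meta name="csrf-token" content="3c49b84153f91578285e0dc4f224911">
</head><body></body></html>
'''

# select_one returns None if the tag is missing, so guard before indexing
tag = BeautifulSoup(html, 'html.parser').select_one('meta[name="csrf-token"]')
csrf_token = tag['content'] if tag else None
print(csrf_token)  # → 3c49b84153f91578285e0dc4f224911
```

If this prints None against the real page, the token is not in a meta tag at all and must be taken from a hidden form input or a cookie instead.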

1 Answer


    It is worth noting that when you use the Python Requests module, your request is not exactly the same as a standard user's request. To mimic a real request fully, and therefore avoid being blocked by any firewall or security measure the site may use, you will need to copy all POST parameters, all GET parameters, and finally the headers.

    You can intercept the login request with a tool such as Burp Suite. Copy the URL it is sent to, copy all of the POST parameters, and finally copy all of the headers. You should use requests.Session() to store the cookies. You may also want to make an initial GET request to the home page with that session to pick up the cookies, since a user sending a login request without first visiting the home page would be unrealistic.

    I hope that makes sense. Header parameters can be passed like this:

    import requests
    
    headers = {
        'User-Agent': 'My User Agent (copy your real one for a realistic request).'
    }
    
    data = {
        'username': 'John',
        'password': 'Doe'
    }
    
    s = requests.Session()
    s.get("https://mywebsite.com/")
    s.post("https://mywebsite.com/", data=data, headers=headers)
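The cookie-persistence point above can also be illustrated offline: cookies stored on a requests.Session are merged into every request that session prepares, so there is no need to pass cookies= by hand. A small sketch (the cookie value is made up):

```python
import requests

s = requests.Session()
# Pretend an earlier GET to the home page set a session cookie
s.cookies.set("PHPSESSID", "3eb4f503f38bfda1c6f48b8f9036574a")

# prepare_request merges the session's cookies into the outgoing headers
prep = s.prepare_request(requests.Request("GET", "https://example.com/"))
print(prep.headers["Cookie"])  # → PHPSESSID=3eb4f503f38bfda1c6f48b8f9036574a
```

This is why passing both the session and an explicit cookies= argument, as in the question's code, is redundant: the session already carries the cookies from the initial GET.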
    
