Hello fellow programmers!
I'm trying to write a script using Python and the mechanize module to log in to my university's "meal balance" webpage...
This is the page I'm trying to log in to: http://www.wcu.edu/11407.asp. The site has the following login form:
<FORM method=post action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx><INPUT value=https://cf.wcu.edu/busafrs/catcard/idsearch.cfm type=hidden name=wcuirs_uri>
<P><B>WCU ID Number<BR></B><INPUT maxLength=12 size=12 type=password name=id> </P>
<P><B>PIN<BR></B><INPUT maxLength=20 type=password name=PIN> </P>
<P></P>
<P><INPUT value="Request Access" type=submit name=submit> </P></FORM>
From this we know that I need to fill in the following fields: 1. name=id, 2. name=PIN
and submit them to the action: https://itapp.wcu.edu/BanAuthRedirector/Default.aspx
Here is the script I have written so far:
#!/usr/bin/python2 -W ignore
import mechanize, cookielib
from time import sleep
url = 'http://www.wcu.edu/11407.asp'
myId = '11111111111'
myPin = '22222222222'
# Browser
#br = mechanize.Browser()
#br = mechanize.Browser(factory=mechanize.DefaultFactory(i_want_broken_xhtml_support=True))
br = mechanize.Browser(factory=mechanize.RobustFactory()) # Use this because of bad html tags in the html...
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Follow refresh 0 but don't hang on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
# User-Agent (pretend to be google-chrome on linux x86_64)
br.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'),
('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
('Accept-Encoding', 'gzip,deflate,sdch'),
('Accept-Language', 'en-US,en;q=0.8'),
('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')]
# The site we will navigate into
br.open(url)
# Go though all the forms (for debugging only)
for f in br.forms():
    print f
# Select the login form (form index 2 on the page)
br.select_form(nr=2)
# User credentials
br.form['id'] = myId
br.form['PIN'] = myPin
br.form.action = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx'
# Login
br.submit()
# Wait 10 seconds
sleep(10)
# Save to a file
f = file('mycatpage.html', 'w')
f.write(br.response().read())
f.close()
Now the problem...
For some strange reason, the page I get back (in mycatpage.html) is the login page instead of the expected page showing my "Cat Cash balance" and the number of "block meals" left...
Does anyone have any idea why? Keep in mind that all the headers are correct, and that the ID and PIN aren't really 11111111111 and 22222222222; the real values do work with the site (when using a browser...)
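In case it helps, here is the kind of debugging output I can add around the submit. This is just a rough sketch (same site, placeholder credentials); it only prints the selected form's controls, the URL the redirector actually lands on, and whatever cookies come back:

#!/usr/bin/python2 -W ignore
import mechanize, cookielib

url = 'http://www.wcu.edu/11407.asp'
br = mechanize.Browser(factory=mechanize.RobustFactory())
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
br.set_handle_robots(False)

br.open(url)
# The login form could also be picked by its action instead of a hard-coded index:
# br.select_form(predicate=lambda f: 'BanAuthRedirector' in (f.action or ''))
br.select_form(nr=2)
for control in br.form.controls:
    print control.name, control.value    # expect wcuirs_uri, id, PIN, submit
br.form['id'] = 'xxxxxxxx'
br.form['PIN'] = 'xxxxxxxx'
response = br.submit()
print response.geturl()                  # where the redirector actually landed
for cookie in cj:
    print cookie.name, cookie.domain     # session cookies received (if any)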
Thanks in advance
EDIT
Another script I tried:
from urllib import urlopen, urlencode
import urllib2
import httplib
url = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx'
myId = 'xxxxxxxx'
myPin = 'xxxxxxxx'
data = {
'id':myId,
'PIN':myPin,
'submit':'Request Access',
'wcuirs_uri':'https://cf.wcu.edu/busafrs/catcard/idsearch.cfm'
}
opener = urllib2.build_opener()
opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'),
('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
('Accept-Encoding', 'gzip,deflate,sdch'),
('Accept-Language', 'en-US,en;q=0.8'),
('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')]
request = urllib2.Request(url, urlencode(data))
open("mycatpage.html", 'w').write(opener.open(request))
This has the same behavior...
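One difference I notice is that this urllib2 version never handles cookies at all. A cookie-aware variant would look like the sketch below (just a guess that Default.aspx sets a session cookie and then redirects to the page named in wcuirs_uri; I also dropped the gzip Accept-Encoding header, since urllib2 does not decompress responses by itself):

from urllib import urlencode
import cookielib, urllib2

myId = 'xxxxxxxx'
myPin = 'xxxxxxxx'
# Cookie-aware opener, so any session cookie set by Default.aspx survives the redirect
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 '
                      '(KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11')]
data = urlencode({
    'id': myId,
    'PIN': myPin,
    'submit': 'Request Access',
    'wcuirs_uri': 'https://cf.wcu.edu/busafrs/catcard/idsearch.cfm',
})
response = opener.open('https://itapp.wcu.edu/BanAuthRedirector/Default.aspx', data)
open('mycatpage.html', 'w').write(response.read())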