How to check whether your crawler's proxy IP disguise has succeeded
I recently wanted to scrape some information, such as the job title, from https://jobs.51job.com/chengdu-gxq/131868888.html?s=sou_sou_soulb&t=0_0. The page is rendered dynamically, and my earlier Selenium script only returned garbled content. Today I rewrote the program and fetched the data again, and this time it worked.
As we all know, recruitment sites have strict anti-crawling measures, especially against IPs. Sometimes we add a proxy to the crawler but cannot tell whether requests are actually going out through the proxy IP, especially in dynamic-forwarding mode, so we need a way to verify it. Below is one way to check whether the proxy disguise succeeded, for reference. The example comes from https://www.16yun.cn/help/ss_demo/#8selenium
from selenium import webdriver
import string
import zipfile

# Proxy server (product site: www.16yun.cn)
proxyHost = "t.16yun.cn"
proxyPort = "31111"
# Proxy auth credentials
proxyUser = "16EACILI"
proxyPass = "676780"

def create_proxy_auth_extension(proxy_host, proxy_port,
                                proxy_username, proxy_password,
                                scheme='http', plugin_path=None):
    """Build a Chrome extension (as a .zip) that configures the proxy
    and answers the proxy's auth challenge automatically."""
    if plugin_path is None:
        plugin_path = r'D:/{}_{}@t.16yun.zip'.format(proxy_username, proxy_password)

    manifest_json = """
    {
        "version": "1.0.0",
        "manifest_version": 2,
        "name": "16YUN Proxy",
        "permissions": [
            "proxy",
            "tabs",
            "unlimitedStorage",
            "storage",
            "<all_urls>",
            "webRequest",
            "webRequestBlocking"
        ],
        "background": {
            "scripts": ["background.js"]
        },
        "minimum_chrome_version": "22.0.0"
    }
    """

    background_js = string.Template(
        """
        var config = {
            mode: "fixed_servers",
            rules: {
                singleProxy: {
                    scheme: "${scheme}",
                    host: "${host}",
                    port: parseInt(${port})
                },
                bypassList: ["foobar.com"]
            }
        };

        chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});

        function callbackFn(details) {
            return {
                authCredentials: {
                    username: "${username}",
                    password: "${password}"
                }
            };
        }

        chrome.webRequest.onAuthRequired.addListener(
            callbackFn,
            {urls: ["<all_urls>"]},
            ['blocking']
        );
        """
    ).substitute(
        host=proxy_host,
        port=proxy_port,
        username=proxy_username,
        password=proxy_password,
        scheme=scheme,
    )

    with zipfile.ZipFile(plugin_path, 'w') as zp:
        zp.writestr("manifest.json", manifest_json)
        zp.writestr("background.js", background_js)
    return plugin_path

proxy_auth_plugin_path = create_proxy_auth_extension(
    proxy_host=proxyHost,
    proxy_port=proxyPort,
    proxy_username=proxyUser,
    proxy_password=proxyPass)

option = webdriver.ChromeOptions()
option.add_argument("--start-maximized")

# If you hit a chrome-extensions error:
# option.add_argument("--disable-extensions")

option.add_extension(proxy_auth_plugin_path)

# Hide some webdriver automation flags
# option.add_experimental_option('excludeSwitches', ['enable-automation'])

driver = webdriver.Chrome(options=option)

# Override the navigator.webdriver property
# script = '''
# Object.defineProperty(navigator, 'webdriver', {
#     get: () => undefined
# })
# '''
# driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {"source": script})

driver.get("https://jobs.51job.com/chengdu-gxq/131868888.html?s=sou_sou_soulb&t=0_0")
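To confirm the browser above is really going out through the proxy, you can point the same driver at http://httpbin.org/ip and read the IP it reports. The helper below is a minimal sketch for pulling the reported IP out of the page source; the `driver` usage in the comments assumes the Chrome instance built above with the proxy extension loaded.

```python
import json
import re

def ip_from_page_source(page_source: str) -> str:
    # httpbin.org/ip returns a small JSON object like {"origin": "1.2.3.4"};
    # Chrome wraps the raw response in HTML, so locate the JSON first.
    match = re.search(r'\{[^{}]*"origin"[^{}]*\}', page_source)
    if match is None:
        raise ValueError("no httpbin-style JSON found in page source")
    return json.loads(match.group(0))["origin"]

# Usage with the driver built above (proxy extension already loaded):
# driver.get("http://httpbin.org/ip")
# print("exit IP seen by the site:", ip_from_page_source(driver.page_source))
```

If the printed IP matches the proxy's exit IP rather than your own, the proxy is in effect.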
We can visit http://httpbin.org/ip to see which IP our requests come from, and then cross-check against www.ip138.com to confirm whether the proxy IP is in effect.
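The same check can be done without a browser at all. The sketch below uses only the standard library: fetch http://httpbin.org/ip once directly and once through the proxy, and compare the two reported IPs. The `http://user:pass@host:port` proxy URL format is an assumption based on the credentials above; adjust it to whatever your provider documents.

```python
import json
import urllib.request

def extract_origin_ip(body: str) -> str:
    # httpbin.org/ip responds with JSON like {"origin": "1.2.3.4"}
    return json.loads(body)["origin"]

def fetch_ip_via_proxy(proxy_url: str) -> str:
    # Route the request through the proxy; httpbin should then report
    # the proxy's exit IP instead of your own.
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    opener = urllib.request.build_opener(handler)
    with opener.open("http://httpbin.org/ip", timeout=10) as resp:
        return extract_origin_ip(resp.read().decode("utf-8"))

# Example comparison (network access and a live proxy required):
# direct = extract_origin_ip(
#     urllib.request.urlopen("http://httpbin.org/ip", timeout=10).read().decode("utf-8"))
# proxied = fetch_ip_via_proxy("http://16EACILI:676780@t.16yun.cn:31111")
# print("proxy working:", direct != proxied)
```

If the two IPs differ, the proxy is being used; if they match, requests are still leaving from your own address.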
xiaotaomi
21-08-11 16:41