Elegantly Hunting for Website Source Code (Part 1)

0x0 Preface

During a penetration test, getting hold of the target website's source code is essentially stepping into a god's-eye view. The general idea has been shared many times before — use a search engine to find other sites running the same system, then batch-scan them for backup files — but the detailed process is rarely written up. Here I have organized a few approaches I tried while building the backup/directory-scanning module of my own distributed scanner, together with some other ways of tracking down source code, in the hope of giving readers something new.
0x1 Search techniques

0x1.1 Code hosting platforms

GitHub abroad and Gitee at home are both code hosting platforms, and with a few search tricks we can dig up plenty of leaked sensitive information, including the source code of many programs. I rarely use Gitee, so I will only mention it in passing; below, the focus is on GitHub search syntax.
Personally, the biggest benefit of learning this syntax is that when a query returns a huge pile of results, you can filter out the junk by adding a few qualifiers.

GitHub search page: https://github.com/search

(1) Quick cheat sheet
Basic queries:
- Searching repositories
- Searching code
- Searching users
(2) Personal dork queries, for example:

filename:config.php dbpasswd
filename:.bashrc password
shodan_api_key language:python
path:sites databases password
"baidu.com" ssh language:yaml
filename:file.php admin in:path
org:companyname "AWS_ACCESS_KEY_ID:"

(3) To query a specific keyword, wrap it in double quotes, e.g. "qq.com"

(4) You can use GitDorker with custom dorks to automate the querying:

git clone https://github.com/obheda12/GitDorker.git
cd GitDorker
docker build -t gitdorker .
docker run -it gitdorker
docker run -it -v $(pwd)/tf:/tf gitdorker -tf tf/TOKENSFILE -q tesla.com -d dorks/DORKFILE -o tesla
docker run -it -v $(pwd)/tf:/tf xshuden/gitdorker -tf tf/TOKENSFILE -q tesla.com -d dorks/DORKFILE -o tesla
Running it without Docker:

python3 GitDorker.py -tf ./TF/TOKENSFILE -q ximalaya.com -d ./Dorks/alldorksv3 -o xmly
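If you would rather script the queries yourself instead of going through GitDorker, a rough sketch against GitHub's code-search REST API could look like the following (the GITHUB_TOKEN environment variable and the example dork are placeholders you supply yourself, and the API's rate limits still apply):

import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # a personal access token you provide

def github_code_search(query: str, per_page: int = 30) -> list:
    # Authenticated call to GitHub's /search/code endpoint
    resp = requests.get(
        "https://api.github.com/search/code",
        params={"q": query, "per_page": per_page},
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {GITHUB_TOKEN}",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    # Example: look for config files that mention the target domain
    for item in github_code_search('"ximalaya.com" filename:config.php'):
        print(item["repository"]["full_name"], item["html_url"])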
References:
https://github.com/techgaun/github-dorks
https://infosecwriteups.com/github-dork-553b7b84bcf4

0x1.2 Search engines
Google dorks along the lines of: "XX source code", "XX full package", "xx installation tutorial", "xx backup", "xx code", "xx open source", "xx source program", "xx framework", plus file-type dorks such as xx ext:rar | ext:tar.gz | ext:zip
0x1.3 Netdisk search

https://www.feifeipan.com/
https://www.dalipan.com/
https://www.chaonengsou.com/ (this last site aggregates the others and is fairly complete)
0x2 The indirect route

If, after everything in 0x1, you still cannot find the source code, the target is probably some niche or commercial system that never spread widely on the public internet, which is why nothing turns up in searches. At that point we can take an indirect route: look for backup files and source packages under the web root of the target site itself and download them; if that also fails, find other sites running the same system and scan their directories for backup files and source packages, and obtain the system's source code that way.
We should not be giants in thought and dwarfs in action, so how do we carry this out efficiently? It can be broken down into the following steps.

0x2.1 Extract fingerprints

For fingerprints, focus first on the home page, i.e. the page served when you visit the bare domain, because the home page is what search-engine crawlers index most easily; after that, collect fingerprints from other distinctive pages reachable from the home page.
(1) Logo fingerprint: request favicon.ico and compute its hash.
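As a concrete example, here is a minimal sketch of computing the mmh3 favicon hash used by Shodan/FOFA-style icon searches (it assumes the requests and mmh3 packages are installed; the URL is a placeholder):

import base64
import mmh3       # pip install mmh3
import requests

def favicon_hash(url: str) -> int:
    # Fetch the favicon and hash it the way Shodan/FOFA do:
    # mmh3 over the base64 of the raw bytes, keeping the newlines
    # that base64.encodebytes inserts.
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return mmh3.hash(base64.encodebytes(resp.content))

if __name__ == "__main__":
    h = favicon_hash("https://example.com/favicon.ico")
    print(h)  # drop the value into a query such as icon_hash="<h>"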
(2) Keyword fingerprints: the site title, copyright notice, JavaScript keywords, HTML source structure, and HTTP response header quirks.

0x2.2 Asset collection

For asset collection, besides driving my own scripts that integrate the fofa, shodan and zoomeye platforms, I am also fond of one particular tool because it is feature-rich and runs stably: fofaviewer.
Download: https://github.com/wgpsec/fofa_viewer
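If you prefer to script the collection rather than use the GUI, a rough sketch of querying FOFA's search API with the fingerprints from 0x2.1 might look like this (the endpoint and parameters follow FOFA's public v1 API; the credential environment variables and the example query are placeholders, and the shodan/zoomeye equivalents would follow the same pattern):

import base64
import os
import requests

FOFA_EMAIL = os.environ["FOFA_EMAIL"]  # placeholder credentials you provide
FOFA_KEY = os.environ["FOFA_KEY"]

def fofa_search(query: str, size: int = 100) -> list:
    # FOFA expects the query base64-encoded in the qbase64 parameter
    qbase64 = base64.b64encode(query.encode()).decode()
    resp = requests.get(
        "https://fofa.info/api/v1/search/all",
        params={
            "email": FOFA_EMAIL,
            "key": FOFA_KEY,
            "qbase64": qbase64,
            "size": size,
            "fields": "host,ip,port",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    # Combine fingerprints from 0x2.1, e.g. page title plus favicon hash
    for host, ip, port in fofa_search('title="SomeCMS" && icon_hash="-1588080585"'):
        print(host, ip, port)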
0x2.3 Simple fuzzing

Once the assets are collected, I like to start with httpx for some simple path probing:

cat targets.txt | deduplicate | httpx -path /wwwroot.zip -status-code
This acts as a simple filtering layer that cuts down the number of requests nuclei has to make.

0x2.4 Writing a nuclei template

Reading the official templating guide teaches us the following.

Step 1: template info. Create a file named back-up-files.yaml and write in the content below.
Reference: https://nuclei.projectdiscovery.io/templating-guide/#template-detail

From it we learn that id is mandatory, must not contain spaces, and usually matches the file name. The info block is free-form: besides name, author, description, severity and tags you can add other key:value pairs; tags can be used by nuclei to select templates, so it is worth imitating similar community templates.

id: back-up-files

info:
  name: Find Resource Code Of Target Template
  author: xq17
  severity: medium
  tags: exposure,backup

Step 2: sending the request.

Reference: https://nuclei.projectdiscovery.io/templating-guide/protocols/http/ which tells us:
1. HTTP requests start with a request block which specifies the start of the requests for the template.
2. The request method can be GET, POST, PUT, DELETE, etc., depending on the needs.
3. Redirection conditions can be specified per template. By default, redirects are not followed; if desired, they can be enabled with redirects: true in the request details.
4. The next part of the request is the request path. Dynamic variables can be placed in the path to modify its behavior at runtime. Variables start with {{ and end with }} and are case-sensitive.
   {{BaseURL}} - replaced at runtime with the original URL as specified in the target file.
   {{Hostname}} - replaced at runtime with the hostname of the target.
5. Headers can also be sent along with the requests, as key/value pairs. An example header configuration looks like this:
# headers contains the headers for the request
headers:
  # Custom user-agent header
  User-Agent: Some-Random-User-Agent
  # Custom request origin
  Origin: https://google.com
6. Body specifies a body to be sent along with the request (needed when sending POST requests).
7. To maintain a cookie-based, browser-like session across multiple requests, simply put cookie-reuse: true in your template. This is useful when you need to keep a session across a series of requests to complete an exploit chain or perform authenticated scans (i.e. session reuse, which lets you chain a login step with the attack that follows).

# cookie-reuse accepts boolean input, default false
cookie-reuse: true

8. Request conditions allow checks across multiple requests, for writing complex checks and exploits where several HTTP requests complete the exploit chain. With the DSL matcher, enable it by adding req-condition: true and refer to attributes with a request-number suffix, e.g. status_code_1, status_code_3, body_2 (useful for complex attack chains).

req-condition: true
matchers:
  - type: dsl
    dsl:
      - "status_code_1 == 404 && status_code_2 == 200 && contains((body_2), secret_string)"

There is plenty more advanced syntax, such as raw HTTP requests and race conditions, but we do not need it here; with documentation, knowing enough to get the job done is fine.
requests:
  - method: GET
    path:
      - "{{BaseURL}}/wwwroot.zip"
      - "{{BaseURL}}/www.zip"

Step 3: matching the response.
Reference: https://nuclei.projectdiscovery.io/templating-guide/operators/matchers/ which tells us that multiple matchers can be specified in a request, and that there are basically six types:

- status (status code)
- size (response size)
- word (string match)
- regex (regular expression)
- binary (binary content)
- dsl (highly customizable checks that can run all sorts of operations on the response; not needed here for now)

Available helper functions: https://nuclei.projectdiscovery.io/templating-guide/helper-functions/
For words and regexes, multiple match conditions on the response can be combined: several words and regexes can be specified in a single matcher and configured with conditions like AND and OR.

You can also choose which part of the response to match. The default part is body if not defined; header or any other part can be selected as well.

All matcher types support negative conditions, which is the charm of matching by exclusion; enable it by adding negative: true in the matcher block.

Multiple matchers can be used in a single template to fingerprint multiple conditions with a single request.

matchers-condition is also supported: when using multiple matchers the default is OR between them, and AND can be used so that a result is only returned if every matcher is true.
Combining what the documentation says, we can write the following matchers:

matchers-condition: and
matchers:
  - type: binary
    binary:
      - "504B0304"  # zip
    part: body
  - type: dsl
    dsl:
      - "len(body)>0"
  - type: status
    status:
      - 200

Step 4: connecting the pieces. Joining the snippets above in order gives the full template:

id: back-up-files

info:
  name: Find Resource Code Of Target Template
  author: xq17
  severity: medium
  tags: exposure,backup

requests:
  - method: GET
    path:
      - "{{BaseURL}}/wwwroot.zip"
      - "{{BaseURL}}/www.zip"
    matchers-condition: and
    matchers:
      - type: binary
        binary:
          - "504B0304"  # zip
        part: body
      - type: dsl
        dsl:
          - "len(body)>0"
      - type: status
        status:
          - 200

0x2.5 Testing the template
Spin up a local target for debugging:

python3 -m http.server 9091
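So that the matchers actually have something to hit, you can drop a small zip archive into the directory being served; a throwaway sketch (file name and contents are arbitrary):

import zipfile

# Create a dummy backup archive so the template's zip-signature,
# non-empty-body and 200 matchers all fire against the local server.
with zipfile.ZipFile("wwwroot.zip", "w") as zf:
    zf.writestr("index.php", "<?php echo 'dummy backup'; ?>")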
Then run the template in debug mode:

echo http://127.0.0.1:9091 | nuclei -t back-up-files.yaml -debug -timeout 2 -stats -proxy-url http://127.0.0.1:8080/
The requests sent during the scan:
You can see that once nuclei is armed with this template, it quickly fuzzes out the site's backup files.

0x3 Summary

This first part introduced a number of approaches and the basics of writing a simple nuclei template, to help newcomers get started quickly. Part 2 will be about strengthening the template: adding a directory/file wordlist for scanning and judging the response more precisely (readers are encouraged to read the nuclei-templates documentation on their own first, which will make the follow-up much easier). Part 3 will apply the knowledge from the first two parts and the enhanced template to walk through a real hunt for a website's source code.