Python: Remove irrelevant files

This commit is contained in:
Taus Brock-Nannestad
2020-11-02 14:08:59 +01:00
parent f84ab2fa99
commit 8147ad4e0b
20 changed files with 0 additions and 831 deletions


@@ -1,61 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
Accessing files using paths constructed from user-controlled data can allow an attacker to access
unexpected resources. This can result in sensitive information being revealed or deleted, or an
attacker being able to influence behavior by modifying unexpected files.
</p>
</overview>
<recommendation>
<p>
Validate user input before using it to construct a file path, either using an off-the-shelf library function
like <code>werkzeug.utils.secure_filename</code>, or by performing custom validation.
</p>
<p>
Ideally, follow these rules:
</p>
<ul>
<li>Do not allow more than a single "." character.</li>
<li>Do not allow directory separators such as "/" or "\" (depending on the file system).</li>
<li>Do not rely on simply replacing problematic sequences such as "../". For example, after
applying this filter to ".../...//", the resulting string would still be "../".</li>
<li>Use an allowlist of known good patterns.</li>
</ul>
</recommendation>
<example>
<p>
In the first example, a file name is read from an HTTP request and then used to access a file.
However, a malicious user could enter a file name that is an absolute path, such as
<code>"/etc/passwd"</code>.
</p>
<p>
In the second example, it appears that the user is restricted to opening a file within the
<code>"user"</code> home directory. However, a malicious user could enter a file name containing
special characters. For example, the string <code>"../../../etc/passwd"</code> will result in the code
reading the file located at <code>"/server/static/images/../../../etc/passwd"</code>, which is the system's
password file. This file would then be sent back to the user, giving them access to all the
system's passwords.
</p>
<p>
In the third example, the path used to access the file system is normalized <em>before</em> being checked against a
known prefix. This ensures that regardless of the user input, the resulting path is safe.
</p>
<sample src="examples/tainted_path.py" />
</example>
<references>
<li>OWASP: <a href="https://owasp.org/www-community/attacks/Path_Traversal">Path Traversal</a>.</li>
<li>Werkzeug: <a href="http://werkzeug.pocoo.org/docs/utils/#werkzeug.utils.secure_filename">werkzeug.utils.secure_filename</a>.</li>
</references>
</qhelp>
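The validation rules above can be sketched as a small standard-library helper (the name `resolve_under_base` is illustrative, not from any library; the trailing-separator comparison guards against sibling-directory prefixes such as `/srv/images-secret`):

```python
import os.path

def resolve_under_base(base_dir, user_supplied):
    """Join a user-supplied file name onto base_dir, rejecting any input
    that would escape the base directory (hypothetical helper)."""
    # Normalise first, then check the prefix, so sequences such as
    # "../" cannot slip through after joining.
    full = os.path.normpath(os.path.join(base_dir, user_supplied))
    # Compare against the base directory with a trailing separator so that
    # a sibling such as "/srv/images-secret" is not accepted as a prefix.
    if not full.startswith(os.path.join(base_dir, '')):
        raise ValueError("path escapes base directory: %r" % user_supplied)
    return full
```

For example, `resolve_under_base('/srv/images', 'cat.png')` succeeds, while both `'../../etc/passwd'` and the absolute path `'/etc/passwd'` are rejected.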


@@ -1,75 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>Extracting files from a malicious tar archive without validating that the destination file path
is within the destination directory can cause files outside the destination directory to be
overwritten, due to the possible presence of directory traversal elements (<code>..</code>) in
archive paths.</p>
<p>Tar archives contain archive entries representing each file in the archive. These entries
include a file path for the entry, but these file paths are not restricted and may contain
unexpected special elements such as the directory traversal element (<code>..</code>). If these
file paths are used to determine an output file to write the contents of the archive item to, then
the file may be written to an unexpected location. This can result in sensitive information being
revealed or deleted, or an attacker being able to influence behavior by modifying unexpected
files.</p>
<p>For example, if a tar archive contains a file entry <code>..\sneaky-file</code>, and the tar archive
is extracted to the directory <code>c:\output</code>, then naively combining the paths would result
in an output file path of <code>c:\output\..\sneaky-file</code>, which would cause the file to be
written to <code>c:\sneaky-file</code>.</p>
</overview>
<recommendation>
<p>Ensure that output paths constructed from tar archive entries are validated
to prevent writing files to unexpected locations.</p>
<p>The recommended way of writing an output file from a tar archive entry is to check that
<code>".."</code> does not occur in the path.
</p>
</recommendation>
<example>
<p>
In this example an archive is extracted without validating file paths.
If <code>archive.tar</code> contained relative paths (for
instance, if it were created by something like <code>tar -cf archive.tar
../file.txt</code>) then executing this code could write to locations
outside the destination directory.
</p>
<sample src="examples/tarslip_bad.py" />
<p>To fix this vulnerability, we need to check that the path does not
contain any <code>".."</code> elements in it.
</p>
<sample src="examples/tarslip_good.py" />
</example>
<references>
<li>
Snyk:
<a href="https://snyk.io/research/zip-slip-vulnerability">Zip Slip Vulnerability</a>.
</li>
<li>
OWASP:
<a href="https://owasp.org/www-community/attacks/Path_Traversal">Path Traversal</a>.
</li>
<li>
Python Library Reference:
<a href="https://docs.python.org/3/library/tarfile.html#tarfile.TarFile.extract">TarFile.extract</a>.
</li>
<li>
Python Library Reference:
<a href="https://docs.python.org/3/library/tarfile.html#tarfile.TarFile.extractall">TarFile.extractall</a>.
</li>
</references>
</qhelp>
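The recommendation can be illustrated with a self-contained sketch (the helper name `is_safe_member` is my own): it builds an in-memory archive containing a traversal entry, then flags any member whose normalised output path would land outside the destination directory.

```python
import io
import os.path
import tarfile

def is_safe_member(member, dest_dir):
    """Return True if extracting this TarInfo under dest_dir cannot
    escape dest_dir (illustrative check, mirroring the qhelp advice)."""
    target = os.path.normpath(os.path.join(dest_dir, member.name))
    return target.startswith(os.path.join(dest_dir, ''))

# Build a small archive in memory containing a traversal entry.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    data = b'oops'
    evil = tarfile.TarInfo(name='../evil.txt')
    evil.size = len(data)
    tar.addfile(evil, io.BytesIO(data))
    ok = tarfile.TarInfo(name='ok.txt')
    ok.size = len(data)
    tar.addfile(ok, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    # Only '../evil.txt' is flagged; 'ok.txt' stays inside /tmp/unpack.
    unsafe = [m.name for m in tar.getmembers()
              if not is_safe_member(m, '/tmp/unpack')]
```

Nothing is extracted here; the point is that the check must run on every member before any call to `extract`.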


@@ -1,181 +0,0 @@
/**
* @name Arbitrary file write during tarfile extraction
* @description Extracting files from a malicious tar archive without validating that the
* destination file path is within the destination directory can cause files outside
* the destination directory to be overwritten.
* @kind path-problem
* @id py/tarslip
* @problem.severity error
* @precision medium
* @tags security
* external/cwe/cwe-022
*/
import python
import semmle.python.security.Paths
import semmle.python.dataflow.TaintTracking
import semmle.python.security.strings.Basic

/** A TaintKind to represent open tarfile objects. That is, the result of calling `tarfile.open(...)` */
class OpenTarFile extends TaintKind {
  OpenTarFile() { this = "tarfile.open" }

  override TaintKind getTaintOfMethodResult(string name) {
    name = "getmember" and result instanceof TarFileInfo
    or
    name = "getmembers" and result.(SequenceKind).getItem() instanceof TarFileInfo
  }

  override ClassValue getType() { result = Value::named("tarfile.TarFile") }

  override TaintKind getTaintForIteration() { result instanceof TarFileInfo }
}

/** The source of open tarfile objects. That is, any call to `tarfile.open(...)` */
class TarfileOpen extends TaintSource {
  TarfileOpen() {
    Value::named("tarfile.open").getACall() = this and
    /*
     * If argument refers to a string object, then it's a hardcoded path and
     * this tarfile is safe.
     */
    not this.(CallNode).getAnArg().pointsTo(any(StringValue str)) and
    /* Ignore opens within the tarfile module itself */
    not this.(ControlFlowNode).getLocation().getFile().getBaseName() = "tarfile.py"
  }

  override predicate isSourceOf(TaintKind kind) { kind instanceof OpenTarFile }
}

class TarFileInfo extends TaintKind {
  TarFileInfo() { this = "tarfile.entry" }

  override TaintKind getTaintOfMethodResult(string name) { name = "next" and result = this }

  override TaintKind getTaintOfAttribute(string name) {
    name = "name" and result instanceof TarFileInfo
  }
}

/*
 * For efficiency we don't want to track the flow of taint
 * around the tarfile module.
 */
class ExcludeTarFilePy extends Sanitizer {
  ExcludeTarFilePy() { this = "Tar sanitizer" }

  override predicate sanitizingNode(TaintKind taint, ControlFlowNode node) {
    node.getLocation().getFile().getBaseName() = "tarfile.py" and
    (
      taint instanceof OpenTarFile
      or
      taint instanceof TarFileInfo
      or
      taint.(SequenceKind).getItem() instanceof TarFileInfo
    )
  }
}

/* Any call to an extractall method */
class ExtractAllSink extends TaintSink {
  CallNode call;

  ExtractAllSink() {
    this = call.getFunction().(AttrNode).getObject("extractall") and
    count(call.getAnArg()) = 0
  }

  override predicate sinks(TaintKind kind) { kind instanceof OpenTarFile }
}

/* Argument to extract method */
class ExtractSink extends TaintSink {
  CallNode call;

  ExtractSink() {
    call.getFunction().(AttrNode).getName() = "extract" and
    this = call.getArg(0)
  }

  override predicate sinks(TaintKind kind) { kind instanceof TarFileInfo }
}

/* Members argument to extractall method */
class ExtractMembersSink extends TaintSink {
  CallNode call;

  ExtractMembersSink() {
    call.getFunction().(AttrNode).getName() = "extractall" and
    (this = call.getArg(0) or this = call.getArgByName("members"))
  }

  override predicate sinks(TaintKind kind) {
    kind.(SequenceKind).getItem() instanceof TarFileInfo
    or
    kind instanceof OpenTarFile
  }
}

class TarFileInfoSanitizer extends Sanitizer {
  TarFileInfoSanitizer() { this = "TarInfo sanitizer" }

  /** The test `if <path_sanitizing_test>:` clears taint on its `false` edge. */
  override predicate sanitizingEdge(TaintKind taint, PyEdgeRefinement test) {
    taint instanceof TarFileInfo and
    clears_taint_on_false_edge(test.getTest(), test.getSense())
  }

  private predicate clears_taint_on_false_edge(ControlFlowNode test, boolean sense) {
    path_sanitizing_test(test) and
    sense = false
    or
    // handle `not` (also nested)
    test.(UnaryExprNode).getNode().getOp() instanceof Not and
    clears_taint_on_false_edge(test.(UnaryExprNode).getOperand(), sense.booleanNot())
  }
}

private predicate path_sanitizing_test(ControlFlowNode test) {
  /* Assume that any test with "path" in it is a sanitizer */
  test.getAChild+().(AttrNode).getName().matches("%path")
  or
  test.getAChild+().(NameNode).getId().matches("%path")
}

class TarSlipConfiguration extends TaintTracking::Configuration {
  TarSlipConfiguration() { this = "TarSlip configuration" }

  override predicate isSource(TaintTracking::Source source) { source instanceof TarfileOpen }

  override predicate isSink(TaintTracking::Sink sink) {
    sink instanceof ExtractSink or
    sink instanceof ExtractAllSink or
    sink instanceof ExtractMembersSink
  }

  override predicate isSanitizer(Sanitizer sanitizer) {
    sanitizer instanceof TarFileInfoSanitizer
    or
    sanitizer instanceof ExcludeTarFilePy
  }

  override predicate isBarrier(DataFlow::Node node) {
    // Avoid flow into the tarfile module
    exists(ParameterDefinition def |
      node.asVariable().getDefinition() = def
      or
      node.asCfgNode() = def.getDefiningNode()
    |
      def.getScope() = Value::named("tarfile.open").(CallableValue).getScope()
      or
      def.isSelf() and def.getScope().getEnclosingModule().getName() = "tarfile"
    )
  }
}

from TarSlipConfiguration config, TaintedPathSource src, TaintedPathSink sink
where config.hasFlowPath(src, sink)
select sink.getSink(), src, sink, "Extraction of tarfile from $@", src.getSource(),
  "a potentially untrusted source"


@@ -1,36 +0,0 @@
import os.path

from django.conf.urls import url
from django.http import HttpResponse

def user_picture1(request):
    """A view that is vulnerable to malicious file access."""
    filename = request.GET.get('p')
    # BAD: This could read any file on the file system
    data = open(filename, 'rb').read()
    return HttpResponse(data)

def user_picture2(request):
    """A view that is vulnerable to malicious file access."""
    base_path = '/server/static/images'
    filename = request.GET.get('p')
    # BAD: This could still read any file on the file system
    data = open(os.path.join(base_path, filename), 'rb').read()
    return HttpResponse(data)

def user_picture3(request):
    """A view that is not vulnerable to malicious file access."""
    base_path = '/server/static/images'
    filename = request.GET.get('p')
    # GOOD -- Verify with normalised version of path
    fullpath = os.path.normpath(os.path.join(base_path, filename))
    if not fullpath.startswith(base_path):
        raise PermissionError()
    data = open(fullpath, 'rb').read()
    return HttpResponse(data)

urlpatterns = [
    # Route to user_picture views
    url(r'^user-pic1$', user_picture1, name='user-picture1'),
    url(r'^user-pic2$', user_picture2, name='user-picture2'),
    url(r'^user-pic3$', user_picture3, name='user-picture3')
]


@@ -1,7 +0,0 @@
import tarfile

with tarfile.open('archive.tar') as tar:
    # BAD: This could write any file on the file system.
    for entry in tar:
        tar.extract(entry, "/tmp/unpack/")


@@ -1,10 +0,0 @@
import tarfile
import os.path

with tarfile.open('archive.tar') as tar:
    for entry in tar:
        # GOOD: Check that entry is safe
        if os.path.isabs(entry.name) or ".." in entry.name:
            raise ValueError("Illegal tar archive entry")
        tar.extract(entry, "/tmp/unpack/")


@@ -1,41 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>Code that passes user input directly to
<code>exec</code>, <code>eval</code>, or some other library
routine that executes a command allows the user to execute malicious
code.</p>
</overview>
<recommendation>
<p>If possible, use hard-coded string literals to specify the command to run
or the library to load. Instead of passing the user input directly to the
process or library function, examine the user input and then choose
among hard-coded string literals.</p>
<p>If the applicable libraries or commands cannot be determined at
compile time, then add code to verify that the user input string is
safe before using it.</p>
</recommendation>
<example>
<p>The following example shows two functions. The first is unsafe as it takes a shell script that can be changed
by a user, and passes it straight to <code>subprocess.call()</code> without examining it first.
The second is safe as it selects the command from a predefined allowlist.</p>
<sample src="examples/command_injection.py" />
</example>
<references>
<li>
OWASP:
<a href="https://www.owasp.org/index.php/Command_Injection">Command Injection</a>.
</li>
</references>
</qhelp>
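The allowlist approach from the recommendation can be sketched as follows (the `COMMANDS` table and `run_action` helper are illustrative, not part of any library):

```python
import subprocess

# Hypothetical allowlist mapping user-facing action names to fixed,
# fully hard-coded argument vectors.
COMMANDS = {
    "list": ["ls"],
    "stat": ["stat", "."],
}

def run_action(action):
    """Run only commands picked from the allowlist; reject everything else."""
    try:
        argv = COMMANDS[action]
    except KeyError:
        raise ValueError("unknown action: %r" % action)
    # argv is entirely hard-coded, so no user data reaches the command line.
    return subprocess.call(argv, stdout=subprocess.DEVNULL)
```

The user input only ever selects *which* hard-coded command runs; it is never spliced into the command itself.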


@@ -1,23 +0,0 @@
import subprocess

from django.conf.urls import url

COMMANDS = {
    "list": "ls",
    "stat": "stat"
}

def command_execution_unsafe(request):
    if request.method == 'POST':
        action = request.POST.get('action', '')
        # BAD -- No sanitizing of input
        subprocess.call(["application", action])

def command_execution_safe(request):
    if request.method == 'POST':
        action = request.POST.get('action', '')
        # GOOD -- Use an allowlist
        subprocess.call(["application", COMMANDS[action]])

urlpatterns = [
    # Route to command_execution
    url(r'^command-ex1$', command_execution_unsafe, name='command-execution-unsafe'),
    url(r'^command-ex2$', command_execution_safe, name='command-execution-safe')
]


@@ -1,44 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
Cross-site scripting (XSS) attacks can occur if untrusted input is not escaped. This applies to templates as well as code.
<code>jinja2</code> templates may be vulnerable to XSS if the environment has <code>autoescape</code> set to <code>False</code>.
Unfortunately, <code>jinja2</code> sets <code>autoescape</code> to <code>False</code> by default.
Explicitly setting <code>autoescape</code> to <code>True</code> when creating an <code>Environment</code> object will prevent this.
</p>
</overview>
<recommendation>
<p>
Avoid setting jinja2 autoescape to False.
Jinja2 provides the function <code>select_autoescape</code> to make sure that the correct auto-escaping is chosen.
For example, it can be used when creating an environment: <code>Environment(autoescape=select_autoescape(['html', 'xml']))</code>.
</p>
</recommendation>
<example>
<p>
The following example is a minimal Flask app which shows a safe and an unsafe way to render the given name back to the page.
The first view is unsafe as <code>first_name</code> is not escaped, leaving the page vulnerable to cross-site scripting attacks.
The second view is safe as <code>first_name</code> is escaped, so it is not vulnerable to cross-site scripting attacks.
</p>
<sample src="examples/jinja2.py" />
</example>
<references>
<li>
Jinja2: <a href="http://jinja.pocoo.org/docs/2.10/api/">API</a>.
</li>
<li>
Wikipedia: <a href="http://en.wikipedia.org/wiki/Cross-site_scripting">Cross-site scripting</a>.
</li>
<li>
OWASP: <a href="https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html">XSS (Cross Site Scripting) Prevention Cheat Sheet</a>.
</li>
</references>
</qhelp>
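A minimal sketch of the recommendation, assuming <code>jinja2</code> is installed (the inline template via <code>from_string</code> is illustrative; note that <code>select_autoescape</code> also escapes string templates by default):

```python
from jinja2 import Environment, select_autoescape

# Environment with auto-escaping enabled for HTML/XML templates
# (and, by default, for templates loaded from strings).
env = Environment(autoescape=select_autoescape(['html', 'xml']))
template = env.from_string("Hello {{ name }}!")

# The script payload is escaped rather than emitted as live markup.
rendered = template.render(name="<script>alert(1)</script>")
```

With `autoescape=False` (the default when constructing `Environment()` with no arguments), the same render would emit the `<script>` tag verbatim.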


@@ -1,48 +0,0 @@
/**
* @name Jinja2 templating with autoescape=False
* @description Using jinja2 templates with 'autoescape=False' can
* cause a cross-site scripting vulnerability.
* @kind problem
* @problem.severity error
* @precision medium
* @id py/jinja2/autoescape-false
* @tags security
* external/cwe/cwe-079
*/
import python
/*
* Jinja 2 Docs:
* https://jinja.palletsprojects.com/en/2.11.x/api/#jinja2.Environment
* https://jinja.palletsprojects.com/en/2.11.x/api/#jinja2.Template
*
* Although the docs don't say so very clearly, autoescape is a valid argument when constructing
* a Template manually
*
* unsafe_tmpl = Template('Hello {{ name }}!')
* safe1_tmpl = Template('Hello {{ name }}!', autoescape=True)
*/
ClassValue jinja2EnvironmentOrTemplate() {
result = Value::named("jinja2.Environment")
or
result = Value::named("jinja2.Template")
}
ControlFlowNode getAutoEscapeParameter(CallNode call) { result = call.getArgByName("autoescape") }
from CallNode call
where
call.getFunction().pointsTo(jinja2EnvironmentOrTemplate()) and
not exists(call.getNode().getStarargs()) and
not exists(call.getNode().getKwargs()) and
(
not exists(getAutoEscapeParameter(call))
or
exists(Value isFalse |
getAutoEscapeParameter(call).pointsTo(isFalse) and
isFalse.getDefiniteBooleanValue() = false
)
)
select call, "Using jinja2 templates with autoescape=False can potentially allow XSS attacks."


@@ -1,45 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
Directly writing user input (for example, an HTTP request parameter) to a webpage
without properly sanitizing the input first allows a cross-site scripting vulnerability.
</p>
</overview>
<recommendation>
<p>
To guard against cross-site scripting, consider escaping the input before writing user input to the page.
The standard library provides escaping functions: <code>html.escape()</code> for Python 3.2 and later,
or <code>cgi.escape()</code> for older versions of Python.
Most frameworks also provide their own escaping functions, for example <code>flask.escape()</code>.
</p>
</recommendation>
<example>
<p>
The following example is a minimal Flask app which shows a safe and an unsafe way to render the given name back to the page.
The first view is unsafe as <code>first_name</code> is not escaped, leaving the page vulnerable to cross-site scripting attacks.
The second view is safe as <code>first_name</code> is escaped, so it is not vulnerable to cross-site scripting attacks.
</p>
<sample src="examples/xss.py" />
</example>
<references>
<li>
OWASP:
<a href="https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html">XSS
(Cross Site Scripting) Prevention Cheat Sheet</a>.
</li>
<li>
Wikipedia: <a href="http://en.wikipedia.org/wiki/Cross-site_scripting">Cross-site scripting</a>.
</li>
<li>
Python Library Reference:
<a href="https://docs.python.org/3/library/html.html#html.escape">html.escape()</a>.
</li>
</references>
</qhelp>
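The escaping step can be sketched with the standard library alone (`greet` is a hypothetical helper standing in for the Flask view):

```python
import html

def greet(first_name):
    """Build the response body with the user-controlled value escaped
    (stdlib equivalent of flask.escape, for illustration)."""
    return "Your name is " + html.escape(first_name)

# A script payload comes out as inert text, not live markup.
safe_body = greet("<script>alert(1)</script>")
```

`html.escape` replaces `&`, `<`, `>`, and (by default) quote characters with HTML entities, so the browser renders the input as text.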


@@ -1,27 +0,0 @@
from flask import Flask, request, make_response
from jinja2 import Environment, select_autoescape, FileSystemLoader

app = Flask(__name__)
loader = FileSystemLoader(searchpath="templates/")

unsafe_env = Environment(loader=loader)
safe1_env = Environment(loader=loader, autoescape=True)
safe2_env = Environment(loader=loader, autoescape=select_autoescape())

def render_response_from_env(env):
    name = request.args.get('name', '')
    template = env.get_template('template.html')
    return make_response(template.render(name=name))

@app.route('/unsafe')
def unsafe():
    return render_response_from_env(unsafe_env)

@app.route('/safe1')
def safe1():
    return render_response_from_env(safe1_env)

@app.route('/safe2')
def safe2():
    return render_response_from_env(safe2_env)


@@ -1,13 +0,0 @@
from flask import Flask, request, make_response, escape

app = Flask(__name__)

@app.route('/unsafe')
def unsafe():
    first_name = request.args.get('name', '')
    return make_response("Your name is " + first_name)

@app.route('/safe')
def safe():
    first_name = request.args.get('name', '')
    return make_response("Your name is " + escape(first_name))


@@ -1,56 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
If a database query (such as a SQL or NoSQL query) is built from
user-provided data without sufficient sanitization, a user
may be able to run malicious database queries.
</p>
</overview>
<recommendation>
<p>
Most database connector libraries offer a way of safely
embedding untrusted data into a query by means of query parameters
or prepared statements.
</p>
</recommendation>
<example>
<p>
In the following snippet, a user is fetched from the database using three
different queries.
</p>
<p>
In the first case, the query string is built by
directly using string formatting from a user-supplied request parameter.
The parameter may include quote characters, so this
code is vulnerable to a SQL injection attack.
</p>
<p>
In the second case, the user-supplied request attribute is passed
to the database using query parameters. The database connector library will
take care of escaping and inserting quotes as needed.
</p>
<p>
In the third case, the placeholder in the SQL string has been manually quoted. Since most
database connector libraries will insert their own quotes, doing so yourself will make the code
vulnerable to SQL injection attacks. In this example, if <code>username</code> was
<code>; DROP ALL TABLES -- </code>, the final SQL query would be
<code>SELECT * FROM users WHERE username = ''; DROP ALL TABLES -- ''</code>.
</p>
<sample src="examples/sql_injection.py" />
</example>
<references>
<li>Wikipedia: <a href="https://en.wikipedia.org/wiki/SQL_injection">SQL injection</a>.</li>
<li>OWASP: <a href="https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html">SQL Injection Prevention Cheat Sheet</a>.</li>
</references>
</qhelp>
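The difference between the first two cases can be demonstrated with the standard library's <code>sqlite3</code> module (which uses <code>?</code> placeholders rather than the <code>%s</code> style shown in the Django example; the table and payload are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "' OR '1'='1"

# BAD: string formatting splices the quote characters into the SQL,
# turning the WHERE clause into a tautology that matches every row.
bad_rows = conn.execute(
    "SELECT * FROM users WHERE username = '%s'" % malicious).fetchall()

# GOOD: a query parameter treats the whole input as a single literal
# value, so the crafted string matches nothing.
good_rows = conn.execute(
    "SELECT * FROM users WHERE username = ?", (malicious,)).fetchall()
```

The formatted query returns the `alice` row even though no user typed that name; the parameterized query correctly returns no rows.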


@@ -1,19 +0,0 @@
from django.conf.urls import url
from django.db import connection

def show_user(request, username):
    with connection.cursor() as cursor:
        # BAD -- Using string formatting
        cursor.execute("SELECT * FROM users WHERE username = '%s'" % username)
        user = cursor.fetchone()

        # GOOD -- Using query parameters
        cursor.execute("SELECT * FROM users WHERE username = %s", [username])
        user = cursor.fetchone()

        # BAD -- Manually quoting placeholder (%s)
        cursor.execute("SELECT * FROM users WHERE username = '%s'", [username])
        user = cursor.fetchone()

urlpatterns = [url(r'^users/(?P<username>[^/]+)$', show_user)]


@@ -1,46 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
Directly evaluating user input (for example, an HTTP request parameter) as code without properly
sanitizing the input first allows an attacker to execute arbitrary code. This can occur when user
input is passed to code that interprets it as an expression to be
evaluated, such as <code>eval</code> or <code>exec</code>.
</p>
</overview>
<recommendation>
<p>
Avoid including user input in any expression that may be dynamically evaluated. If user input must
be included, use context-specific escaping before including it.
It is important that the correct escaping is used for the type of evaluation that will occur.
</p>
</recommendation>
<example>
<p>
The following example shows two functions setting a name from a request.
The first function uses <code>exec</code> to execute the <code>setname</code> function.
This is dangerous as it can allow a malicious user to execute arbitrary code on the server.
For example, the user could supply the value <code>"' + subprocess.call('rm -rf') + '"</code>
to destroy the server's file system.
The second function calls the <code>setname</code> function directly and is thus safe.
</p>
<sample src="examples/code_injection.py" />
</example>
<references>
<li>
OWASP:
<a href="https://www.owasp.org/index.php/Code_Injection">Code Injection</a>.
</li>
<li>
Wikipedia: <a href="https://en.wikipedia.org/wiki/Code_injection">Code Injection</a>.
</li>
</references>
</qhelp>
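A benign, self-contained sketch of how the breakout works (`setname`, `recorded`, and the payload are illustrative): a crafted value closes the quoted string inside the code passed to <code>exec</code>, while the same value passed as an ordinary argument stays inert data.

```python
recorded = []

def setname(name):
    recorded.append(name)

# A crafted value that breaks out of the quoted string in the exec'd code.
payload = "'); recorded.append('INJECTED'); ('"

# BAD: the payload becomes part of the executed program:
# setname(''); recorded.append('INJECTED'); ('')
exec("setname('%s')" % payload)

# GOOD: the same value passed as an ordinary argument is just data.
setname(payload)
```

After the `exec` call, `recorded` contains the attacker-chosen `'INJECTED'` entry; after the direct call, it simply contains the payload string as a value.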


@@ -1,18 +0,0 @@
import base64

from django.conf.urls import url

def code_execution_bad(request):
    if request.method == 'POST':
        first_name = base64.decodestring(request.POST.get('first_name', ''))
        # BAD -- Allow user to define code to be run.
        exec("setname('%s')" % first_name)

def code_execution_good(request):
    if request.method == 'POST':
        first_name = base64.decodestring(request.POST.get('first_name', ''))
        # GOOD -- Call code directly
        setname(first_name)

urlpatterns = [
    # Route to code_execution
    url(r'^code-ex1$', code_execution_bad, name='code-execution-bad'),
    url(r'^code-ex2$', code_execution_good, name='code-execution-good')
]


@@ -1,10 +0,0 @@
from django.conf.urls import url
import json

def safe(pickled):
    return json.loads(pickled)

urlpatterns = [
    url(r'^(?P<object>.*)$', safe)
]


@@ -1,10 +0,0 @@
from django.conf.urls import url
import pickle

def unsafe(pickled):
    return pickle.loads(pickled)

urlpatterns = [
    url(r'^(?P<object>.*)$', unsafe)
]


@@ -1,61 +0,0 @@
<!DOCTYPE qhelp PUBLIC "-//Semmle//qhelp//EN" "qhelp.dtd">
<qhelp>
<overview>
<p>
Deserializing untrusted data using any deserialization framework that
allows the construction of arbitrary serializable objects is easily exploitable
and in many cases allows an attacker to execute arbitrary code. Even before a
deserialized object is returned to the caller of a deserialization method a lot
of code may have been executed, including static initializers, constructors,
and finalizers. Automatic deserialization of fields means that an attacker may
craft a nested combination of objects on which the executed initialization code
may have unforeseen effects, such as the execution of arbitrary code.
</p>
<p>
There are many different serialization frameworks. This query currently
supports <code>pickle</code>, <code>marshal</code>, and YAML.
</p>
</overview>
<recommendation>
<p>
Avoid deserialization of untrusted data if at all possible. If the
architecture permits it then use other formats instead of serialized objects,
for example JSON.
</p>
</recommendation>
<example>
<p>
The following example calls <code>pickle.loads</code> directly on a
value provided by an incoming HTTP request. Pickle then creates a new value from untrusted data, and is
therefore inherently unsafe.
</p>
<sample src="UnpicklingBad.py" />
<p>
Changing the code to use <code>json.loads</code> instead of <code>pickle.loads</code> removes the vulnerability.
</p>
<sample src="JsonGood.py" />
</example>
<references>
<li>
OWASP vulnerability description:
<a href="https://www.owasp.org/index.php/Deserialization_of_untrusted_data">Deserialization of untrusted data</a>.
</li>
<li>
OWASP guidance on deserializing objects:
<a href="https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html">Deserialization Cheat Sheet</a>.
</li>
<li>
Talks by Chris Frohoff &amp; Gabriel Lawrence:
<a href="http://frohoff.github.io/appseccali-marshalling-pickles/">
AppSecCali 2015: Marshalling Pickles - how deserializing objects will ruin your day</a>.
</li>
</references>
</qhelp>
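A benign sketch of why loading an untrusted pickle is equivalent to code execution, and why JSON is not (the `Evil` class and `record` helper are made up for illustration; in a real attack the callable would be something like `os.system`):

```python
import json
import pickle

def record(value):
    """Stand-in for a dangerous callable such as os.system."""
    record.calls.append(value)
record.calls = []

class Evil:
    def __reduce__(self):
        # Tell pickle to "reconstruct" this object by calling
        # record("pwned") -- an arbitrary call of the attacker's choosing.
        return (record, ("pwned",))

payload = pickle.dumps(Evil())

# BAD: merely loading the bytes runs the attacker-chosen call.
pickle.loads(payload)

# GOOD: json.loads can only ever produce plain data structures.
data = json.loads('{"user": "alice"}')
```

After `pickle.loads`, the `record` function has already been invoked, before any application code inspects the result; `json.loads` has no such hook.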