karlis | January 4, 2017, 11:33am | #6
Happens to me as well. Fastlane is up to date (version 2.4.0), and we are using the official plugin (getsentry/sentry-fastlane-plugin).
[03:13:41]: Will upload dSYM(s) to https://app.getsentry.com/api/0/projects/xxx/xxx/files/dsyms/
[03:13:41]: Uploading... /xxx.dSYM.zip
[03:14:46]: Error: 413 Request Entity Too Large
The organization and project slugs are correct, and so is the auth token, because when we repeat the upload with the same dSYM file it sometimes works. The uploaded dSYM is 25 MB.
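For anyone who needs a temporary workaround in CI, retrying the upload can help, since a repeated attempt sometimes goes through. A rough sketch using sentry-cli directly (the org/project slugs, dSYM path, retry count and sleep interval are placeholders, not our real values; SENTRY_AUTH_TOKEN is assumed to be set in the environment):

``` bash
#!/usr/bin/env bash
# Hypothetical retry wrapper around the dSYM upload; all names below are placeholders.
set -u

for attempt in 1 2 3; do
  if sentry-cli upload-dsym --org my-org --project my-project /path/to/MyApp.dSYM.zip; then
    echo "dSYM upload succeeded on attempt $attempt"
    exit 0
  fi
  echo "Upload failed (attempt $attempt), retrying in 30 seconds..." >&2
  sleep 30
done

echo "dSYM upload still failing after 3 attempts" >&2
exit 1
```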
There’s nothing to wonder at: the “413 Request Entity Too Large” error is returned by nginx when the uploaded file is larger than nginx’s “client_max_body_size” setting. It seems that some of your nginx instances are improperly configured.
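For app.getsentry.com only Sentry’s own ops team can fix this, but for anyone hitting the same 413 on a self-hosted Sentry behind nginx, this is roughly where the limit lives (paths and the 100m value are assumptions, not a recommendation from Sentry):

``` bash
# Only relevant for self-hosted Sentry; config paths and the limit are assumptions.
grep -R "client_max_body_size" /etc/nginx/

# Raise the limit in the server/location block that proxies to Sentry, e.g.:
#     client_max_body_size 100m;
# then validate the config and reload nginx:
sudo nginx -t && sudo nginx -s reload
```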
Looks like this error has happened before:
I ran upload-sourcemaps with the sentry CLI. This usually works, but for my newest release, I am seeing:
Running with sourcemap validation
...
All Good!
Uploading sourcemaps for release af84b7
bundle.js.map -> https://..../bundle.js.map
error: could not perform API request: http error: generic error (413)
(GitHub issue, opened 29 Apr 2015 at 01:59 UTC, closed 3 Feb 2018 at 03:44 UTC; labels: Bug, Critical, Breadcrumbs)
Recently started getting the following error, without any explicit change to Raven or Sentry configuration. Google results are unrevealing. Is there something known that can cause this?
Is there a proper way of logging the original `AttributeError` that caused the Sentry error more fully, besides disabling Sentry?
``` bash
INFO:werkzeug:172.17.42.1 - - [29/Apr/2015 01:02:51] "POST / HTTP/1.1" 500 -
ERROR:sentry.errors:Unable to reach Sentry log server: HTTP Error 413: Request Entity Too Large (url: https://app.getsentry.com/api/39671/store/, body: b'<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor="white">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n')
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/raven/transport/threaded.py", line 159, in send_sync
    super(ThreadedHTTPTransport, self).send(data, headers)
  File "/usr/local/lib/python3.4/site-packages/raven/transport/http.py", line 49, in send
    ca_certs=self.ca_certs,
  File "/usr/local/lib/python3.4/site-packages/raven/utils/http.py", line 62, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/local/lib/python3.4/urllib/request.py", line 461, in open
    response = meth(req, response)
  File "/usr/local/lib/python3.4/urllib/request.py", line 571, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/local/lib/python3.4/urllib/request.py", line 499, in error
    return self._call_chain(*args)
  File "/usr/local/lib/python3.4/urllib/request.py", line 433, in _call_chain
    result = func(*args)
  File "/usr/local/lib/python3.4/urllib/request.py", line 579, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 413: Request Entity Too Large
ERROR:sentry.errors:Failed to submit message: "AttributeError: 'dict' object has no attribute 'encode'"
```
We are paying for this service and it’s really great, but we have four projects depending on this functionality, and right now this breaks our continuous deployment process.
Thank you!