Wrangler Review 2010-11-04

Please complete the How To Test section; we need some idea of how a person would go about testing this feature.

Thank you. poelcat 16:41, 4 November 2010 (UTC)

How To Test looks good. Somehow we are missing the User Experience and Release Notes sections. Please complete them too. This needs to be completed before FESCo reviews. Thanks poelcat 17:50, 16 November 2010 (UTC)

Other question

--Dmalcolm 20:44, 15 November 2010 (UTC): This approach seems to have some drawbacks:

  • the user has to send the coredump across the internet to a Fedora site, and the coredump might contain sensitive information. The user has no way of telling whether the backtrace will contain sensitive information until the analysis is received back from the remote server.
  • the coredump may be rather large (many megabytes), and some people may object to uploading that much data to a remote site; many people also have asymmetric internet connections, where upload rates are considerably slower than download rates.

Alternate approach: make the debuginfo sources downloadable from an internet-visible fileserver, and do the analysis on the user's machine. The RPMs could be unpacked on demand. The user's computer would merely be downloading small amounts of public information, rather than sending large amounts of private information. I believe Will Woods was working on something like this.
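
For concreteness, here is a minimal sketch of what that client-side flow could look like. The mirror URL, cache path, and function names are hypothetical, and the sketch assumes rpm2cpio, cpio, and gdb are installed locally; it is not how any existing tool actually implements this.

  # Hedged sketch only: the mirror URL, cache path, and function names are
  # hypothetical; rpm2cpio, cpio, and gdb are assumed to be installed.
  import os
  import subprocess
  import urllib.request

  MIRROR = "https://debuginfo.example.org/fedora"   # hypothetical fileserver

  def fetch_and_unpack(debuginfo_rpm, dest="/var/cache/local-debuginfo"):
      """Download one debuginfo rpm on demand and unpack it under dest."""
      os.makedirs(dest, exist_ok=True)
      local_rpm = os.path.join(dest, debuginfo_rpm)
      urllib.request.urlretrieve(MIRROR + "/" + debuginfo_rpm, local_rpm)
      # Equivalent of: rpm2cpio pkg.rpm | cpio -idm   (run inside dest)
      rpm2cpio = subprocess.Popen(["rpm2cpio", local_rpm], stdout=subprocess.PIPE)
      subprocess.run(["cpio", "-idm"], stdin=rpm2cpio.stdout, cwd=dest, check=True)
      rpm2cpio.stdout.close()
      rpm2cpio.wait()
      return os.path.join(dest, "usr/lib/debug")

  def local_backtrace(executable, coredump, debug_dir):
      """Produce the backtrace entirely on the user's machine with gdb."""
      result = subprocess.run(
          ["gdb", "--batch",
           "-ex", "set debug-file-directory " + debug_dir,
           "-ex", "thread apply all backtrace full",
           executable, coredump],
          capture_output=True, text=True, check=True)
      return result.stdout

With this arrangement the coredump never leaves the user's machine; only public debuginfo packages travel over the network.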

--Mtoman 18:30, 16 November 2010 (UTC):
  • It is one of the problems. The user has to trust the Retrace Server's administrator, which is why only HTTPS communication will be allowed.
  • At the moment Retrace Server uses xz compression. It is able to compress all the information (including the coredump) to a suitable size; for my test crashes it was always < 7 MB, even for 100 MB coredumps from OOo or a web browser. Other compression algorithms, as well as no compression (compression is not really needed if the Retrace Server is running locally), will be available in the future. A sketch of such a compressed HTTPS upload follows this comment.
I guess the alternate project you are talking about is DebuginfoFS, and I agree it would be better in many cases. The main advantage of Retrace Server is that it archives all versions of all packages, so you are able to process crashes even from a system that is not fully updated. That's why we would like to implement both.
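
As referenced above, here is a minimal sketch of the client-side upload path, assuming xz compression and an HTTPS-only endpoint. The URL, content type, and function names are hypothetical and do not reflect the actual Retrace Server protocol.

  # Hedged sketch only: the endpoint URL, content type, and function names are
  # hypothetical and do not reflect the actual Retrace Server protocol.
  import lzma
  import urllib.request

  RETRACE_URL = "https://retrace.example.org/create"   # hypothetical endpoint

  def compress_coredump(path):
      """xz-compress the coredump before it leaves the machine."""
      with open(path, "rb") as f:
          return lzma.compress(f.read(), preset=6)

  def upload_crash(coredump_path):
      """Send the compressed crash data over HTTPS only."""
      payload = compress_coredump(coredump_path)
      request = urllib.request.Request(
          RETRACE_URL,
          data=payload,
          headers={"Content-Type": "application/x-xz"},
          method="POST")
      # The https:// URL means urlopen verifies the server certificate against
      # the system trust store before any crash data is transmitted.
      with urllib.request.urlopen(request) as response:
          return response.read().decode()

The xz compression keeps the upload small, and the HTTPS-only rule addresses the transport side of the privacy concern, though the user still has to trust the server operator with the decompressed coredump.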

--Dmalcolm 22:50, 2 December 2010 (UTC): why would DebuginfoFS not have access to all versions of all packages? Can't it simply be wired up to Koji's NFS server, and access them on-demand?

--Jcm 05:50, 13 December 2010 (UTC): I fully agree with David. Doing this on the user's system with a simple export of debuginfofs available over the network sounds like a much easier solution, with less security risk too :)