Attaching GDB to a dying process in Linux

240 views · asked by user246100

I would like to attach gdb to a dying process. The program runs in production and I need to debug it there; if I run it under gdb from the start it slows down, and the machines are not that powerful. I tried catching signals in the application and attaching gdb from the handler, but that only works for signals I send myself. When the program stalls (it is multi-threaded, and the main thread deadlocks or otherwise gets stuck, or appears stuck) and the user forces it to quit from the desktop environment (LXDE), I can't catch any signal at all.

The program is all Python, with PySide for the graphical interface. I only care about Linux.

My idea is to write a kernel driver and hook process termination or signal delivery there, but since that would be a lot of hassle, I would like to ask first whether there is an existing tool for this kind of thing, or some information I could make use of. Thanks.
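For reference, the signal-handler approach described in the question might look like the minimal sketch below; the handler name and the choice of SIGUSR1 are illustrative additions, not from the original post:

    import os
    import signal
    import subprocess

    def attach_gdb(signum, frame):
        # Spawn gdb attached to this very process. gdb must be on PATH
        # and needs a terminal to be interactively useful.
        subprocess.Popen(["gdb", "-p", str(os.getpid())])

    # SIGUSR1 is an arbitrary choice for manual testing. Note that CPython
    # only runs Python-level signal handlers on the main thread, so if the
    # main thread is deadlocked this handler may never fire -- which is
    # consistent with the behaviour described above. A desktop "force quit"
    # may also end in SIGKILL, which can never be caught.
    signal.signal(signal.SIGUSR1, attach_gdb)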
There is 1 best solution below.
There might be a way to do exactly what you want, but if you can't find one, perhaps it would be sufficient to freeze the program and inspect its memory image?

Enable core dump file generation before the process starts, and then, once it is hosed, terminate it with kill. Then use gdb to open the core file and analyze what was happening.
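A minimal sketch of enabling core dumps from inside the program at startup (the equivalent of running "ulimit -c unlimited" in the launching shell; the fallback branch is an assumption for the case where the administrator has capped the hard limit):

    import resource

    # Call this early at startup: allow the process to write a core file
    # of unlimited size, equivalent to "ulimit -c unlimited" in the shell.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    try:
        resource.setrlimit(resource.RLIMIT_CORE,
                           (resource.RLIM_INFINITY, resource.RLIM_INFINITY))
    except (ValueError, OSError):
        # Raising the hard limit can fail without privileges; raising the
        # soft limit to the existing hard limit still helps.
        resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

Once the process is wedged, "kill -ABRT <pid>" terminates it with a core dump (the default action for SIGABRT), and gdb can then open the image for post-mortem analysis: point gdb at the interpreter binary that actually ran the program, plus the core file. Where the core file lands is governed by /proc/sys/kernel/core_pattern. If terminating the process is undesirable, the gcore utility shipped with gdb can snapshot a core file from a still-running process instead.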