Let's say you want to profile your BASIC program, i.e. determine what lines of code it's spending most of its time on, perhaps to optimize it. Here's a technique I came up with.
From the terminal, I LOADed my program and turned tracing on (TRON). Then, in the terminal program, I turned logging on. (I use TeraTerm, so I selected File -> Log. Note that by default TeraTerm appends to the log file; you want it to create a new file instead, so uncheck the Append box.)
Then I ran my program. As it executed, the line numbers were printed out and TeraTerm wrote them to the log file. Tracing makes the program run slower, but hopefully the results still reflect its normal behavior. When the program had run long enough, I hit Ctrl+C and turned off TeraTerm's logging.
Next, I opened the log file in Notepad++. It contains the line numbers in square brackets. I did a Replace All, replacing "]" with "]\n", i.e. adding a line break after each number. (For Notepad++ to recognize \n as a newline, set the Search Mode to Extended in the Replace dialog.)
Then I saved the file.
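If you'd rather skip the Notepad++ step, the same split can be done from the bash window with sed. This is just a sketch; it assumes your log file is named teraterm.log (use whatever filename you gave TeraTerm):

```shell
# Insert a newline after every closing bracket, writing the result
# to a new file. GNU sed interprets \n in the replacement text.
sed 's/]/]\n/g' teraterm.log > teraterm-split.log
```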
Next, I opened a bash window (I have Windows Subsystem for Linux on my PC). I ran the command:
sort teraterm.log | uniq -c | sort -r -n >profile.txt
The first sort groups identical line numbers together (uniq only collapses adjacent duplicates, so its input must be sorted), uniq -c counts how many times each line number occurs, and the second sort orders the result numerically, in descending order, so the most-executed line number is at the top. profile.txt will contain the list of line numbers, each with the number of times that line was executed, most-executed first. You can use that as a guide for optimizing your code.
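The split-and-count steps can also be collapsed into one pipeline by letting grep pull the bracketed line numbers straight out of the raw log, with no editing step at all. A sketch, again assuming the log file is named teraterm.log; note that grep -o extracts only the bracketed digit runs, so any other output your program prints is ignored (unless the program itself prints numbers in square brackets):

```shell
# Extract every bracketed line number, count occurrences of each,
# and sort so the most-executed line ends up on top.
grep -o '\[[0-9]*\]' teraterm.log | sort | uniq -c | sort -r -n > profile.txt
```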
Note that this tells you which line is executed the most, not which line consumes the most processor time. It's conceivable that a simple line of code executes most often while a more complex line consumes more time overall.
I apologize for the Windows-centric nature of my description, and the dependence on the Linux subsystem. There are certainly plenty of other ways to achieve the same results. If you profile your code with a different set of tools, please post a description here.
- Bob