karl’s gpt4 wrapper was Re: [ot][spam]gpt-4 completions

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Mon Mar 20 08:48:55 PDT 2023


On 3/19/23, ytooR <Arpspoof at proton.me> wrote:
> Great work carl! I love you

hihi rooty <3

the code for the wrapper (which i'd like to expand to run locally) is
at https://codeberg.org/xloem/lm . i was generating git commits with
something like the following, after git clone and pip install (i don't
know if this exact example works; the readme has no history to provide
context):

echo "added a story to the readme" | quickscripts/append.sh README.md \
  | lm | quickscripts/patchfix.py
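
in case that last stage is opaque: patchfix.py is presumably there
because model-written diffs often come back with @@ hunk headers whose
line counts don't match the hunk body (the log quoted at the bottom of
this mail is gpt-4 being asked to write exactly that kind of repair).
a minimal sketch of the recount idea, not the actual patchfix.py; the
function name and toy hunk are made up:

```python
import re

# recompute a hunk header's line counts from the hunk body:
# '-' and context lines count toward the old side, '+' and context toward the new
def fix_hunk_header(header, body_lines):
    start_old, _, start_new, _ = map(int, re.match(
        r'@@ -(\d+),(\d+) \+(\d+),(\d+) @@', header).groups())
    old = sum(1 for l in body_lines if not l.startswith('+'))
    new = sum(1 for l in body_lines if not l.startswith('-'))
    return f'@@ -{start_old},{old} +{start_new},{new} @@'

# toy example: the header says the new side has 2 lines, the body shows 3
print(fix_hunk_header('@@ -1,2 +1,2 @@',
                      [' context line', '-removed line',
                       '+added line', '+another added line']))
# -> @@ -1,2 +1,3 @@
```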

logging the gpt4 output makes it easier for others to build open models
that perform the same behavior. since training models from data is so
normal now, having the data around ends up being really helpful. and of
course i always value public logging.
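
concretely, each request/response pair can just be appended as one json
object per line. a minimal sketch, with field names mirroring the log
quoted at the bottom of this mail; the file name and function are made
up, not the wrapper's actual code:

```python
import json, datetime

# append one request/response record to a jsonl log
def log_completion(prompt, completion, path='gpt4-log.jsonl'):
    record = {
        'time': datetime.datetime.now().isoformat(),
        'lm': 'spellbook',
        'url': 'https://dashboard.scale.com/spellbook/api/v2/deploy/',
        'app_id': 'sm43s5r',
        'input': prompt,
        'output': completion,
    }
    with open(path, 'a') as f:
        f.write(json.dumps(record) + '\n')
```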

the code you quoted below has a bug; it is repaired in the repository
in patchfix.py, which is pending simplification.

the nice thing about generating git commits this way is that the model
does not need to be instruct-tuned, since the git history provides
context. it would work on opt or llama (local models that don't need
network access). gpt4 is pushing the context length frontier, which
makes stuff like this more feasible because you can shoehorn more git
history into the prompt.
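
sketching that idea (the function name, character budget, and prompt
framing are made up, not the actual quickscripts code): the prompt is
mostly just recent git history for the file, trimmed to whatever the
context window allows, with the requested change appended.

```python
import subprocess

# build a prompt from recent git history for one file; a longer context
# window (gpt-4) just means max_chars can grow and more history fits
def history_prompt(message, path='README.md', max_chars=12000):
    history = subprocess.run(
        ['git', 'log', '-p', '--', path],
        capture_output=True, text=True).stdout
    return history[:max_chars] + '\ncommit message: ' + message + '\n'
```

the same prompt would feed a local opt or llama just as well as the
hosted gpt4 endpoint.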

the repository was entirely made on my mother's old ipad, which i am
often near now when, for example, i'm having trouble moving my body and
standing up. i was thinking of it as the "hunt and peck" method of
software engineering.

i briefly visited libreplanet and had a photo taken of me while i was
coding this.

i am now ill and work may be paused; we shall see.

> ------- Original Message -------
> On Sunday, March 19th, 2023 at 11:46 AM, Undescribed Horrific Abuse, One
> Victim & Survivor of Many <gmkarl at gmail.com> wrote:
>
>
>> {"time": "2023-03-19T14:44:38.939857", "lm": "spellbook", "url":
>> "https://dashboard.scale.com/spellbook/api/v2/deploy/", "app_id":
>> "sm43s5r", "input": "Write a python script to correct the line counts
>> in a unified diff that contains corrupt @@ lines.", "output":
>> "```python
>> import re
>> import sys
>>
>> def correct_line_counts(diff_file):
>>     with open(diff_file, 'r') as file:
>>         lines = file.readlines()
>>
>>     corrected_lines = []
>>     line_index = 0
>>
>>     while line_index < len(lines):
>>         line = lines[line_index]
>>
>>         if line.startswith('@@'):
>>             # Extract the line numbers from the @@ line
>>             match = re.match(r'@@ -(\d+),(\d+) \+(\d+),(\d+) @@', line)
>>             if match:
>>                 start_old, count_old, start_new, count_new = map(int, match.groups())
>>
>>                 # Count the actual number of lines added and removed
>>                 actual_count_old = 0
>>                 actual_count_new = 0
>>                 line_index += 1
>>
>>                 while line_index < len(lines) and not lines[line_index].startswith('@@'):
>>                     if lines[line_index].startswith('-'):
>>                         actual_count_old += 1
>>                     elif lines[line_index].startswith('+'):
>>                         actual_count_new += 1
>>                     else:
>>                         actual_count_old += 1
>>                         actual_count_new += 1
>>                     line_index += 1
>>
>>                 # Correct the line counts if necessary
>>                 if count_old != actual_count_old or count_new != actual_count_new:
>>                     corrected_line = f'@@ -{start_old},{actual_count_old} +{start_new},{actual_count_new} @@\n'
>>                     corrected_lines.append(corrected_line)
>>                 else:
>>                     corrected_lines.append(line)
>>             else:
>>                 corrected_lines.append(line)
>>                 line_index += 1
>>         else:
>>             corrected_lines.append(line)
>>             line_index += 1
>>
>>     with open(diff_file, 'w') as file:
>>         file.writelines(corrected_lines)
>>
>> if __name__ == '__main__':
>>     if len(sys.argv) != 2:
>>         print(f'Usage: {sys.argv[0]} <diff_file>')
>>         sys.exit(1)
>>
>>     diff_file = sys.argv[1]
>>
>>     correct_line_counts(diff_file)
>> ```
>>
>> This script takes a unified diff file as input and corrects the line
>> counts in the @@ lines if they are corrupt. To use the script, save it
>> as `correct_diff.py` and run it with the diff file as an argument:
>>
>> ```
>> python correct_diff.py diff_file.txt
>> ```"}
>

