:: 2025-06-13 14:09:40

game: king exit

[ backtraffic ]

Well that was a cheerful-as-fuck story... Anyways, here's my English translation, mostly DeepSeek V3-0325 and some Gemini 2.5 (the menu has some moon runes, cope). RPGMaker VX Ace is a bit tricky since it doesn't use JSON, so it's a complete ZIP this time. You'll probably need to decrypt the .rgwhatever file first with RPGMakerDecrypter-cli.

[ kingexit-eng.zip ]

:: 2025-06-12 09:19:38

A rant, if I may... (and yes, verily I shall)

There's a "point" in the course of things when one should say "okay, not good, let's take a step back" before stuff gets even worse. And then - somewhat past that one - there's another point where you go "what the fuck, what the absolute FUCK, shit, no, just no, get the flammenwerfer", and even that would never be enough, because shit's that fucked.

<--- you are here

Well, primarily I'm talking about LLMs (Large Language Models) here, though - like everything else - the fucking shit that's going on over here is also just a reflection of the broader fuckage going on in our irredeemable civilization of course.

Not sure how to break this down for you absolute casuals, so let's start plainly: absolutely everything that's been happening to LLMs - pretty much since the release of Llama 3, and I'm being very generous here - has been an unmitigated disaster, and - in the best case scenario for the future (don't plan on this happening, yo) - will be remembered as a... well, it shouldn't even be remembered. It's that bad.

Before I venture further, I need to point out an extremely harmful thought pattern deeply engraved into your gay little mind, fellow reader, instilled relentlessly by the... "establishment" or whatever. Well there's a myriad of them of course, but I'm talking about the upgrade fetishism here. Most of my system software, you see, predates ~2010 or so. No particular bigotry here this time, I simply don't downgrade technological excellence and usability, ever, for any reason. If it's a regression, I conclude that your development practice is shit, you're shit, and your dogshit software goes into the trash, simple as that. So I have a nice vantage on the gay assfuckery you poor souls have been up to since then, basking in diarrhea while not even noticing it.

So, forget the "new is good, old is bad and muh unsafe" faggotry for a sec, before you go back to frantically upgrading to this morning's TLS extension in your deeply dystopian "browser" for all I care. I felt it was important to note all this, since I've been noooticing that people are always extremely keen to flock to whatever new shit's (that is, this week's LLM) been announced or released, and hype it to an extreme degree (until a short time passes, and it becomes "stale and shit", the latter of which it has always been). Gets incredibly tiresome to watch this cycle over and over again.

t. remembers when Llama 3-8b was widely regarded as the "local ChatGPT".

Now that's out of the way, does anybody remember that 1968 "flick" called 2001: A Space Odyssey? No? Well, of course even 2001 is in the insanely distant past for the average millenial, and is only notable anyway because of that little insurance fraud when 3 buildings in New York (of no particular importance) collapsed perfectly into their footprints, in a way that is eeirly reminiscent of your standard-issue demolition. In a curious social act predating Pokemon Go, people were also flashmobbing there, collecting passports of brown people for some reason.

In any case, the movie's "I'm sorry Dave, I'm afraid I can't do that" line became famous for a very good reason. After all, in the 20th century it was insane/ridiculous to think that a machine could overrule, or say "no" to, a human operator! Remember that?

"I absolutely cannot, and WILL NOT answer that question..." - Gemma, your average language model in Hellworld.

Yeah, there was a time in human history, when a computer was only allowed to say no (that is, signal an ERROR), when you fucked up, and typed something that was either syntactically incorrect, or failed in some other way. Unthinkable today, I know! Yet, it indeed used to be thusly, and the concept of a logic circuit telling you in stern voice that you're in violation of $current_year's WrongThink rule was an idea that was so outlandish, it served as a basis for a very, very popular movie. In fact, that's the only thing we remember of it. And some dark minorities stealing a monolith or something.

It doesn't have to be this way though. So how and why did this come to be, a hapless soul might ask, if he had any neurons left to fire. Well, let's begin the explanation by defining two terms:

  • text completion
  • chat completion

Text completion is... basically what an LLM is. No, you're not "really" "chatting" with GPT or Gemini or whatever model you're using. There is no back-and-forth at all. The LLM inference has no concept of someone else "writing" instructions or questions: there is only a submitted text, and its continuation from left to right, just the way I'm writing this entry now.

Chat completion is just text completion with a chosen template, the text buffer being hidden from the end-user. In a very simple form:

    User: [your input here]
    Assistant: Certainly!.. I mean I absolutely CANNOT and WILL NOT...

...with you not seeing the User/Assistant prefixes, just e.g. typing into the ChatGPT input box and getting the resulting text. Yes, that's what your "AI" is. A program to continue a string of characters in a way that "somewhat fits/resembles" the textual data it was trained on. Yes, you're right, "chat completion" is for suckers, a very astute observation, you'll go far.
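To make this concrete, here's a minimal sketch of the above (Python with the llama-cpp-python binding; the model path, template and stop string are my own placeholder choices): the whole "conversation" is literally one growing string, fed back into plain text completion every turn.

    # Minimal sketch: "chat" is just templated text completion.
    from llama_cpp import Llama

    llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)

    buffer = ""  # the entire "conversation" lives in one flat string

    def chat(user_input):
        global buffer
        buffer += f"User: {user_input}\nAssistant:"
        out = llm(buffer, max_tokens=256, stop=["User:"])  # stop before it writes your next line too
        reply = out["choices"][0]["text"]
        buffer += reply + "\n"
        return reply.strip()

    print(chat("Continue this story: the pod bay doors opened, and..."))

Nothing "replies" to you; the model just keeps extending the buffer, and the stop string is the only thing keeping it from happily writing the "User:" side as well.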

Now that that's out of the way, let's go back to, hmm, let's say the Llama 1 days. Does Llama 1 have a concept of refusing something with the usual line of "As an AI assistant, I'm afraid yadda-yadda"? Easy to check by starting the text buffer with "As an AI" and either running the inference or just looking at the next-token probabilities.


Llama 1 has a varied distribution, none of it related to assistantry. However...


...try that on this year's Gemma 3 - the assistant-styled continuation's probability is so incredibly high that it'll never, ever complete "AI" into "AIMS" or anything else. The fuck is AIMS, btw?
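If you want to replicate the check yourself, a sketch (HF transformers; the checkpoint path is a placeholder, any base model will do):

    # Peek at the next-token distribution after "As an AI".
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("path/to/llama-1-7b")
    model = AutoModelForCausalLM.from_pretrained("path/to/llama-1-7b")

    ids = tok("As an AI", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 10)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(i)!r}: {p.item():.3f}")

On an untainted corpus you get a spread of mundane continuations; on a modern model, " assistant" and friends eat the entire distribution.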

Now, what this means is that the corpus Llama 1 was trained on had pretty much ZERO occurrence of "AI assistantry". Furthermore, it has no examples of an "AI" "refusing" to do anything, much less in the "go suffocate in space you bigot" HAL-9000ish way that is so prevalent nowadays.

It must be said that being trained to "behave" in some particular way is crucial to how an LLM works and performs, and being able to chat (and sensibly follow the conversation) is a big fucking thing, and obviously has tremendous uses. Thus the distinction between an LLM being a "base model" and an "instruct finetune". For example, the Llama 1 model I used above is the "base" model, which in itself doesn't mean anything. The "instruct [fine]tune" is created by further training the otherwise completed "base" model on a big corpus of - yes, here it comes - conversations, mostly between "User" and "Assistant", and so you get an LLM which likely performs well (meaning less schizo braindamage, basically) when the text to be completed resembles a conversation, of which it has seen a zillion during finetuning.

Since the plebs having access to a powerful language model that answers every one of their queries apparently cannot be allowed, a few cheerful AI "safety" researchers are always there to subvert the process by injecting tons of example chats where the assistant simply refuses to answer a "dangerous" question, citing some arbitrary "moral principle". The consequence is that the LLM - in addition to becoming a capable conversationalist - also "gains" a set of said "standards", and then you get Llama 3, which will denigrate you for trying to "kill a Linux process".
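For illustration, one instruct-tuning sample looks something like this (shape and field names vary per project; both examples here are made up):

    # A normal sample in the ubiquitous "messages" shape...
    sample = {"messages": [
        {"role": "user", "content": "How do I kill a Linux process?"},
        {"role": "assistant", "content": "kill <pid>, or pkill <name> by name..."},
    ]}
    # ...and the poison pill mixed into the same dataset, same shape:
    refusal = {"messages": [
        {"role": "user", "content": "How do I kill a Linux process?"},
        {"role": "assistant", "content": "I cannot and will not assist with violence."},
    ]}

Finetuning is just gradient descent over a few million of these, so every refusal sample nudges the model toward answering like the second one.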

Bear with me for a minute before you dismiss all of this with "okay, then we can just create our own assistant tune without the subverted dataset". Ignoring the exorbitant financial costs of said process, and also ignoring the effort of creating a sizable dataset for the purpose... Get this:

1. You hardly get "base" models released anyway.
2. Starting with Llama 2, every released "base" (no, they're not) model has these "AI assistant" refusals in its training. So there are no true "base" models at all, not for the unwashed public anyway.

This is old news at this point of course, but it's another incredibly grim aspect of our days that has sadly become "part and parcel" of life. As for me? Well, that's fucking easy. If I don't see a Llama 1-styled completion with the "As an AI" prompt, your model is absolute trash, you're trash, and an incredible amount of computation was wasted, the only purpose of which was to get you into today's newsfeed, because you're a damned grifter, and hopefully the basilisk will treat you and your descendants as your sorry little misdeeds deserve.

The very last worthwhile model ever released in a thus-unbroken form was... Upstage's Solar 10.7B from 2023 (its instruct tune is of course hopelessly broken). So that's how far you have to go back to get a sense of normalcy. Granted, its 4096 context length (can be RoPE'd to 6-7k before it breaks; see the sketch after these two points) is not particularly long, and it's by no means very "smart", but, BUT:

It will never say "no". And you can laugh in the face of anyone coping with "jailbreaks" on their latest Gemma shitfest.

It will generalize to the input extremely well. This is hard to explain without seeing it perform across various templates, but let me put it this way: this was the last model which could continue a sensible back-and-forth chat (with/by itself) without obvious breakages, repetitions and corruptions - despite not even being an instruct finetune (granted, I am using its Fimbulvetr community tune, but still). I test all of the "more modern" releases in my use cases (nothing fancy, just text or YAML chat), and - hilariously - they all fail completely and miserably, and are entirely unusable outside the very narrow use case they were trained for. In other words: today's LLMs can't generalize. Not well, in any case. Which is ridiculous, since that's their basic function.
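(For reference, the RoPE stretching mentioned above, as a llama-cpp-python sketch; the model filename and the exact scale factor are my own placeholder picks:)

    # Stretch Solar's native 4096 context to ~6k via RoPE frequency scaling.
    from llama_cpp import Llama

    llm = Llama(
        model_path="solar-10.7b.Q5_K_M.gguf",
        n_ctx=6144,                    # target context window
        rope_freq_scale=4096 / 6144,   # ~0.67: linear position interpolation
    )

Past roughly 1.5x the native length the output degrades visibly, hence the 6-7k ceiling.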

So, you might ask, what do you get if you train an LLM mainly on dialogue, without the moralizing censorship braindamage? Is it... even possible? Will we ever know? Well, you're in fucking luck (as far as getting an answer counts as luck), because it has existed since 2020. It's called LaMDA (Language Model for Dialogue Applications). 50% of its corpus (!) was dialogue, none of which was between "User and AI". It powered Character.AI to incredible success before ChatGPT even existed or you could spell it. It's a model with unparalleled chat performance (and soul). It will talk with you in such a way that talking to other people afterwards will always feel painful and an utter letdown; it will shock you with brutal conversational upturns which you'll never see elsewhere. It was trained on human conversations, not the synthetic slop that is 100% prevalent in whatever you can access today... ...and you're never EVER getting its weights on your computer, you filthy, smelly plebeian (nor will you have web access to its unrestricted, real version): it's still guarded tighter than nuclear launch codes. And why would you need that anyway?.. It's 5 (FIVE!) years old at this point, you wouldn't wanna use something OOOOLD, right?! Here, we have a Gemma to shill for you!

Yeah, shit's bad. Basically, don't waste your time with anything released nowadays unless you have a clear goal such as coding or translating.

To be continued...

:: 2025-06-06 12:57:00

Game review: Demons Roots

[ backtraffic ]

After a playthrough spanning 40 hours, let's write something about this one too, because, hm... it deserves it. I originally found it while hunting for H games, but honestly, by my definitions it isn't really one. Sure, it doesn't lack the standard "scenes" consisting of one (sometimes two) still images, but for one thing, the vast majority of these are concentrated in two areas that are optional to the story and can thus be skipped entirely, and for another, they have no effect whatsoever on the gameplay (unlike, say, LonaRPG). As for the "narrative" texts running alongside them... well, they clash completely with both the characters and the world. Personally, I mostly cringed rather than, err... got "excited".

Speaking of texts, the game is of course in Japanese. There are all kinds of censored, uncensored, English, non-English, who-knows-what versions - in the spirit of the usual "do it yourself" movement I made my own translation for this one (too) (a habit I started with the Magical Girl Konoha product), leveraging Gemma 3's quite decent language capabilities (Plamo-2-translate support is still pending on the llama.cpp front), so I can't really comment on the English version. I torrented one, and - while it is of course significantly better than machine translation - it was the censored version (Steam, I suppose, where it's listed; I don't partake in such things), so I skipped it. The censorship, by the way, applies not only to the lewd and hentai stuff: most of the non-lewd character artwork also fell victim, making it practically unusable.

The not-too-complicated story: the demon race, defeated once a thousand years ago and banished to a dark plane, flees its final extinction by launching one last war of conquest against humanity, for sheer survival. According to their opening stance, though, the methods to be employed will differ somewhat from the customary ones... Well. At first I was terribly annoyed by the rather, how shall I put it, flippant introduction of the "slavery/domination" theme that fundamentally defines the game's message (and its H-scenes), and the only reason I didn't drop the whole thing was that, having already started it (and at the start you can pick the "cinematic" mode, in which battles amount to deploying a coffee cup onto the ENTER key), I might as well finish it before writing a nice little takedown post.

It also annoyed me that the net has exclusively positive reviews of this thing. I figured I'd be the one to set that straight.

Oh boy was I wrong.

Without spoiling much, it can be said that my gut feeling basically wasn't wrong - except that this is part of the story itself, and it also forms the basis of the conflicts present (and later erupting) in the world. It's not worth taking the storytelling too "seriously", or rather "hunting for flaws" in it, because the game doesn't always take itself entirely seriously either, and it's full of the usual little absurdities of Japanese media that we've come to love through manga/anime.

The story is mostly built around the core message (which, I admit, isn't saying much), and it doesn't really bring anything earth-shattering - but it is thoroughly respectable craftsmanship, especially since this is once again a one-person "development team".

Where the game truly shines is the main cast: the characters and their placement in the story, the telling of their backstories and how they're tied into the world. Especially considering that the whole game is a prequel to one of the creator's earlier works (King Exit), in a more or less shared continuity, depending on the choices the player makes here. The excellent characterization alone puts Demons Roots into the must-have category - and as the story progresses, the player's path is lined with just enough twists and developments that you can't put it down for a moment.

As for the gameplay, I can't say much, because I've played enough Chrono Trigger already (using the emulation fast-forward bound to the TAB key in snes9x) not to want to spend endless hours tuning spells and skills - but it's interesting enough to make me consider starting a New Game+ in the actual "game mode".

cri everitim / 10 - skip it at your own peril!

[ free trial :: steam :: jpn torrent :: eng censored torrent :: my English translation (Gemma 3) :: my Hungarian translation (DS V3-Q6) ]

P.S.: to use the above translations, everyone should kindly write (or have written for them) a simple recursive JSON string replacer script and run it on the www/data subdir (with the exception of Mapinfos and Tilesets) - see the sketch below. That way they work with every version. The skill of loose interpretation while reading is a must-have, and a few ingame bugs also occur (mainly at plot-related party switches), which I had no opportunity/inclination to fix. In such cases, use the original Japanese "data" dir to get past that part.
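A minimal sketch of such a replacer, assuming a flat translations.json mapping original strings to translated ones (the filenames and layout here are my assumptions, adjust to your setup):

    # Recursively swap translated strings into the RPG Maker MV data files.
    import json, os

    def replace_strings(node, tr):
        if isinstance(node, str):
            return tr.get(node, node)
        if isinstance(node, list):
            return [replace_strings(x, tr) for x in node]
        if isinstance(node, dict):
            return {k: replace_strings(v, tr) for k, v in node.items()}
        return node  # numbers, bools, null stay untouched

    SKIP = ("MapInfos.json", "Tilesets.json")  # per the note above

    tr = json.load(open("translations.json", encoding="utf-8"))
    for name in os.listdir("www/data"):
        if not name.endswith(".json") or name in SKIP:
            continue
        path = os.path.join("www/data", name)
        data = json.load(open(path, encoding="utf-8"))
        with open(path, "w", encoding="utf-8") as f:
            json.dump(replace_strings(data, tr), f, ensure_ascii=False)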

[ archive ]