Gusarich's thoughts

TON Vanity

Meet the new blazingly fast vanity address generator for TON smart contracts!

The previous state-of-the-art solution was released 3 years ago and hasn't improved much since. We at TON Studio decided to develop a completely new solution from scratch, following the usage patterns the previous solution introduced.

Optimizations in both the smart contract and the kernel lead to extreme generation speedups. In realistic usage scenarios on an RTX 4090, the generator finds suffix patterns up to 5,100x faster and prefix patterns up to 3,600x faster. In practice, this means you can match about 2 more letters than before in a reasonable amount of time.

Apart from the crazy speedups, there's a simple interface for TypeScript usage, such as in smart contract tests and deployment scripts. Applying the generated vanity address requires writing just a few extra lines of code. The generator's output was improved too: all results are stored in a structured JSON format with all useful metadata. Quality-of-life improvements include a polished CLI experience, comprehensive test coverage, and a benchmark script for comparing future optimizations.

The tool will be maintained and improved over time. The entire development process is open on GitHub, and contributions and feedback are welcome. A detailed write-up covering all optimizations and the development process will be published soon, for those who are interested.

Check it out: https://github.com/ton-org/vanity
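For a feel of what such a generator does under the hood, here is a minimal conceptual sketch of vanity-address search. It is not the ton-org/vanity API and uses no real TON primitives: sha256 and base64 stand in for the actual StateInit hashing and address encoding, and every name in it is made up for illustration.

```python
# Conceptual sketch only: brute-force a free "salt" field in the contract's
# initial state until the derived address matches the desired suffix.
# sha256/base64 are stand-ins for TON's real StateInit hashing and address
# encoding; the real tool runs this search massively in parallel on the GPU.
import base64
import hashlib
from itertools import count


def fake_address(code: bytes, salt: int) -> str:
    """Stand-in for deriving an address from the contract's code + data."""
    digest = hashlib.sha256(code + salt.to_bytes(8, "big")).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")


def find_vanity_suffix(code: bytes, suffix: str) -> tuple[int, str]:
    """Try salts until the encoded address ends with `suffix`."""
    for salt in count():
        addr = fake_address(code, salt)
        if addr.endswith(suffix):
            return salt, addr  # this salt is what gets baked into the deployed state


salt, addr = find_vanity_suffix(b"example-contract-code", "AI")
print(salt, addr)
```

Each extra base64 character in the pattern multiplies the expected number of attempts by roughly 64, which is why a ~5,000x speedup translates into about two more matched letters in the same time budget (64^2 = 4,096).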
My personal opinion based on experience:

* GPT-5.1 has the best instruction following, strong agentic capabilities, and very good skills in math, coding, and problem solving.
* GPT-5.1-Codex-Max has worse general capabilities than GPT-5.1, but is noticeably better at large and complex coding tasks.
* Opus 4.5 has the best implicit intent understanding and very good instruction following and agentic capabilities, but lacks the depth of reasoning required for complex problem solving.
* Gemini 3 Pro has the best raw intelligence, especially in math, and good agentic capabilities, but weak instruction following.

So the choice becomes quite simple:

* For well-defined general tasks, go with GPT-5.1.
* For well-defined coding tasks, go with GPT-5.1-Codex-Max.
* For less defined or ambiguous tasks, as well as general agentic scenarios, go with Opus 4.5.
* For math and problem solving in general, go with Gemini 3 Pro, but either pair it with one of the above or invest in extra scaffolding.

I'm personally using GPT-5.1 and its Codex variant most of the time, occasionally trying Opus 4.5 and Gemini 3 Pro when the task fits them. I've gotten used to the way you have to work with GPT-5.1, and it gives very good results on any task I throw at it, as long as the task is defined properly. For vibe-coding and front-end work, I'd go with Opus 4.5 for its intent understanding in more ambiguous cases.

All these models are roughly in the same pricing league, but OpenAI and Google also have stronger beasts: GPT-5.1 Pro and the upcoming Gemini 3 Deep Think. These are only available in expensive subscriptions at $200/$250 a month, and they are very slow. But I'm still using GPT-5.1 Pro almost daily for better results on tasks that require heavy reasoning.

A very common scenario in my work is to throw all the context about a task into GPT-5.1 Pro and ask it to write a detailed implementation plan, then hand that plan to GPT-5.1-Codex-Max to implement (a rough sketch of this loop is below). It works out very well, especially if you do a couple of follow-ups with GPT-5.1 Pro to refine the plan and sync it with your intent. Gemini 3 Deep Think is a similar offering, and it will probably be even better for complex reasoning tasks, but given the weaker instruction following of Gemini models and the fact that it's behind another $250 paywall, I'll stick with ChatGPT Pro for now.
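To make the plan-then-implement loop concrete, here is a minimal sketch using the OpenAI Python SDK's chat completions call. The model identifiers, prompts, and task below are placeholders I made up for illustration, not guaranteed API names; substitute whichever planner and coder models you actually have access to.

```python
# Sketch of the two-stage workflow: a strong reasoning model writes the plan,
# a coding-tuned model implements it. Model names are assumptions.
from openai import OpenAI

client = OpenAI()

PLANNER_MODEL = "gpt-5.1"           # placeholder: planning / reasoning model
CODER_MODEL = "gpt-5.1-codex-max"   # placeholder: implementation model


def chat(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


task_context = "Add rate limiting to the public API of my Flask service."  # hypothetical task

plan = chat(PLANNER_MODEL, f"Write a detailed implementation plan for this task:\n\n{task_context}")
# In practice: do a couple of follow-ups here to refine the plan before handing it off.
code = chat(CODER_MODEL, f"Implement the following plan exactly, step by step:\n\n{plan}")
print(code)
```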
What LLM to use today?

Many major releases occurred in the past weeks. The current frontier consists of models that a couple of months ago were only rumors. And they are great. OpenAI has GPT-5.1 and GPT-5.1-Codex-Max; Anthropic has Opus 4.5; Google has Gemini 3 Pro.

I often work with code, and therefore I need a good coding model. I look at coding benchmarks, like SWE-bench, but the scores there differ by just a couple of percent. Are there just no leaders for coding right now?

It's just that most benchmarks that companies show in release posts aren't really useful. Opus 4.5 having a 3% better score on SWE-bench doesn't mean that it's 3% better overall. In fact, it tells you pretty much nothing that translates even remotely to real-world usage. It could be much worse, or much better, or the same. But this benchmark is still one of the most referenced ones when talking about coding performance. There are more benchmarks like this, not only for coding. And they are just as misleading.

So how do you know which model is better, then? Aren't benchmarks supposed to show that?

Well, benchmarks show exactly what they test, and nothing else. SWE-bench Verified, for example, draws 46% of its tasks from a single Django repository, 87% of its tasks are bugfixes, and 5-10% of its tasks are simply invalid. I wouldn't say this benchmark shows general coding capabilities, and it definitely doesn't show capabilities in vibe-coding scenarios. What it mostly shows is the ability to locate a bug in a Python repository given a bug report, and fix it in one shot.

What matters for real-world coding, then?

From my experience, instruction following and agentic capabilities are the two most important things. A model is useless if it cannot consistently do what you tell it, and it won't be helpful in complex scenarios if it isn't agentic. But honestly, at the moment I don't know of any *good* benchmarks that evaluate this in diverse environments.

How do you pick a model to use, then?

The first thing I'd recommend is to just try all of them on some real tasks you need to do. It doesn't have to be the same task, or even the same complexity. You just have to be honest with yourself and think about how each model works with you on these tasks. Does the model understand what you want from it? Does it complete the task the way you want it to? Does it piss you off less than other models? Does it feel good? If the answers to most of these questions are "yes", then just go with that model. At least it will feel better to you than the others.

Models have different skill distributions and personalities. That's why many people have very different opinions about models even though *they are all good*. I believe that to get the most out of AI capabilities, you have to really understand what each model is good and bad at, and pick models based on the use case. As I said earlier, the current state of coding benchmarks is bad, and therefore I'll be talking solely from my own experience next.
There is nothing out-of-distribution

AI turned out to be very simple if you think about it. You just make a model that works with something generic and feed as much training data as you can into it. The generic data I mean here is text. People have had writing for thousands of years and the whole world is built on it: we write, we speak, we read, and we listen our entire lives. It's so deep in our brains that it's hard to think of something that cannot be described in text.

Some AI pessimists love to say that "LLMs do not generalize out-of-distribution", but if they are built to understand and write text, they can generalize to anything that can be described as text - which is pretty much everything. What people are actually referring to when they point out generalization problems is *intelligence*.

Why couldn't LLMs solve simple puzzles 2 years ago if they can generalize? The reason is simple: LLMs were stupid. Bad data, bad models, and bad training produced models with very limited intelligence. They simply lacked the IQ to generalize well. Nothing has changed conceptually at a fundamental level in the past couple of years - we just gathered better data, designed better architectures, and improved training algorithms.

The things LLMs fail at right now usually aren't "boolean" in nature. If a model solves only 1% of some specific class of tasks today, that means it *can* solve them sometimes, and the next generations of models will solve more. And it's hard to find a task where LLMs couldn't complete at least the simplest version of it today. Failing to solve 100% of tasks of some kind right away just reflects a lack of intelligence. To generalize over some problem, you need the intelligence to understand and solve that problem well. There is no data limitation in LLMs in the sense of generalization ability, only an intelligence limitation - and that is being improved rapidly.

I believe that LLMs are one of the valid paths to ASI. There could be other paths, even better and more general ones, but LLMs can do the thing too. ASI is not far away, and based on what we'll see in 2026, we won't have to change much to reach a superhuman level of general intelligence. I don't see anything that could stop LLMs from progressing further at the current exponential pace.
There is no singularity

When people mention the singularity, they often picture some "point" in time when AI progress starts speeding up exponentially, beyond human control, and it all kind of converges to infinity, and we don't know what happens the second after. I had a similar picture in my head too, until recently. I thought that predicting anything after 2027 was impossible because of this "singularity" that I expected to happen around that time. I decided not to plan anything long-term and to just focus on short-term decisions, leaving the rest as is. And I kind of didn't get why OpenAI plans moves several years ahead as if nothing changes, even though they seem to believe in ASI.

But now I have changed my understanding of it. There will be no singularity. You can plan years ahead, as before. Everything will go just as expected on the scale of humanity.

There is a 4-month-old post by Sam Altman, The Gentle Singularity. He talks about how the singularity won't be a certain point but gradual progress. And here's a quote I'd like to highlight right now:
We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it's one smooth curve.
I kind of understood it on the first read, but not all the dots connected in my head at the time. The key is that you should not think of AI as some exceptional event outside the normal course of progress. It is just one of the many steps humanity takes as it advances. It will speed up overall technological progress significantly, but so did other major advances, from paper to the World Wide Web. All major advances sped up overall progress - that is simply how exponential progress works: new advances speed up the progress toward further advances. AI is not an exception to the trend here, even though it can feel like one.

We won't enter a singularity in the way some people imagine. Everything will go on as usual. New advances will happen every day, same as now. Progress will speed up, same as it always has. The whole "self-evolving AI" thing is no different from, for example, how the existence of the internet allows improving the internet itself.

To make this clearer, think of an actual exponential curve. You can pick three points and scale the chart so that the last point feels "far away" from the previous two, while the difference between those first two points looks negligible compared to the last. We are at that middle point right now. And we are always at it. Whatever we imagine happening 10 years from now feels much less realistic than whatever happened in the last 10 years. And that's normal. Then, as you move along the curve, what felt "impossible" ends up on the left tail and no longer feels that significant. And again, new possibilities open up for future advances that again feel much harder to achieve than before. But that is just how it naturally works, and it is what humanity has always experienced.

And there is some chance an evil ASI kills humanity, for sure. But there was such a chance with many major advances, like when humans built the atomic bomb. And it never stopped humanity from moving forward. There's no point in stopping. We should keep accelerating while accounting for the risks.
Writing with AI

If you scroll up through my Telegram channel, or open my first blog posts, you will easily notice that they were completely written with AI - those very obvious patterns that are easy to spot, like "this isn't just X, it's Y". I was mostly writing drafts myself, but using AI to "finish the paragraph", and then to completely rewrite the whole draft at the end to fix grammar errors and "improve the writing". I'm now ashamed of that.

A lot of time has passed, and only recently did I stop using AI like this. Writing has to come fully from me, otherwise it makes no sense. It also makes me actually think more about what I write. Before, I could just drop a bunch of random thoughts into AI and ask it to turn them into a nice post, but as a result I skipped the whole stage of structured thinking that happens when you write things yourself.

I think I changed my mind about all this after seeing hundreds of fully AI-generated posts on X over the past months. They all look the same. My first posts look this way too when I reread them now. It's soulless, feels cheap, and often provides less value to readers.

I'm not using AI this way anymore, at least for things that actually require some thinking from me. I now only use it to fact-check, spot grammar errors, and give feedback - never to rewrite whole chunks of text or to finish my thoughts. I write everything myself, then ask AI for feedback, then make the changes myself. For random posts on X, I don't even bother with those last steps. But I definitely use AI for boilerplate stuff, like prompts for AI itself or some Slack messages. Those things don't require much thinking from my side anyway, and I think for such cases it's fine.
This February I made a prediction that AI would surpass top human performers in competitive programming by the end of the summer. For context, at that point the best models we had were o1, Sonnet 3.6, and Gemini 2.0. They couldn't reliably solve even simple tasks, and the only hint we had was the late-December preview of o3 (which eventually turned out to be a different version from what we got in April). This "o3-preview", when benchmarked on Codeforces contests, ranked in the 99.8th percentile - extremely good, but still far behind top performers, at roughly 130th place.

In the past couple of months, Google and OpenAI have been showing off their experimental reasoning models by participating in various math and coding olympiads. Just today, they both posted their results at ICPC, the most prestigious competitive programming olympiad, where university students from different countries compete in teams of three. Google's system solved 10/12 problems, which is already enough for a gold medal. And OpenAI got a stunning 12/12 - more than even the top-1 team from Saint Petersburg solved. Basically, AI could solve all of those extremely difficult problems under the same time and constraints as human contestants, and effectively took first place, which I think counts as a successful resolution of my prediction.

Even earlier, in December, I made a more general prediction for 2025: (1) AI would surpass humans in various technical fields (mostly meaning math and coding), and (2) AI would discover a lot of new science. We still can't say that AI has "surpassed" humans in all those things, and it's actually quite hard to define that outcome. And AI hasn't discovered much this year either, but with these recent results in various olympiads I think we're definitely getting there very quickly. Overall, I still believe in those timelines, and in the general timeline of ASI in 2027, or even earlier.
Why did many people have a bad first impression of GPT-5?

Actually, the reason behind it is absurdly stupid: OpenAI fucked up the UX. That's it. The model is actually good; all variants of it are. But OpenAI rushed the release for some reason, and their attempt to make the UX better made it worse for a lot of users.

The key detail is the model router they added to ChatGPT, so that users don't have to pick a model manually - it chooses the appropriate one on its own. For example, if you ask how to pronounce a word, that can easily be answered by a non-thinking model, with lower latency and the same accuracy. But if you give it a math problem, ask something about coding, or generally give it a task that requires more reasoning - it is better processed by a thinking variant of the model. The idea is good, especially for the average user who doesn't know much about how these models work and doesn't want to think about which model to choose for every query.

But the implementation was very bad in the first couple of days, and OpenAI confirmed that themselves. The router was working poorly, not choosing a thinking model for complex queries when needed. On top of that, the information about which model actually answered a query was hidden, so when your request (as a free/plus user) was routed to "GPT-5 mini", for example, you couldn't know that. There isn't even a "mini" model in the model picker.

Another factor is the "reasoning effort" parameter that OpenAI's models have. It determines how hard the model thinks before answering. If you need quicker answers, choose "low"; if you need more reasoning for more complex tasks, use "medium" or "high". The thing is, in ChatGPT you can't choose that setting yourself - it's part of the router too, and the information about it is also hidden from users.

It turned out that most requests from free/plus users were processed either by a non-thinking model, or by a thinking model with the "low" reasoning effort setting - and sometimes by "mini" models when the limits for the regular ones were exhausted. The performance is, expectedly, bad under those circumstances. So even when paying users tried out GPT-5, they were often getting bad results. That was their impression of GPT-5, and that's why there was so much hate for it online.

But OpenAI is fixing those problems, and some are already fixed. So if you tried GPT-5 in the first couple of days and didn't like it, consider trying it again now or in a few days - it might be much better.
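For context on what that knob looks like outside of ChatGPT, here is a small sketch assuming the OpenAI Python SDK and the reasoning-effort parameter it exposes on reasoning models; the model name is a placeholder, not a guaranteed identifier.

```python
# Sketch: when calling a reasoning model through the API, the effort level is
# an explicit parameter rather than something a router picks for you.
# Assumes the OpenAI Python SDK; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()


def ask(question: str, effort: str = "medium") -> str:
    resp = client.chat.completions.create(
        model="gpt-5",                # placeholder reasoning model
        reasoning_effort=effort,      # "low", "medium", or "high"
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


print(ask("How do you pronounce 'sommelier'?", effort="low"))    # quick lookup
print(ask("Prove that sqrt(2) is irrational.", effort="high"))   # needs real reasoning
```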
My impression of GPT-5

This was an extremely anticipated release. Literally the whole AI bubble waited for it and watched closely. It's been 2 years since GPT-4, and people expected something extraordinary. Me too. I had raised my expectations for GPT-5 over the past few months, hoping it would basically be "o4" under a new name, and I expected a capability jump similar to the jump from o1 to o3.

I was also watching the whole rollout extremely closely and had tried GPT-5 before the official release for a few days: first when it was being tested on LMArena under the codenames "Zenith" and "Summit", and again when it was briefly available on Perplexity due to a bug. I didn't try it heavily on real tasks in those days, but I still sent many prompts for testing purposes and got a "taste" of it. It felt similar to o3 in vibe, but smarter, more precise, and just generally better. My expectations rose again after trying it there. I was almost sure it would be an "o4" moment.

But then the day of the release came, and I was watching the livestream. It was boring. I turned it off halfway through. I didn't even try GPT-5 that day and went to sleep, already disappointed by the dull presentation. The next day I was scrolling X and looking at feedback from other people, and it was mostly bad. People said it was either on par with o3 or a step down.

Then I finally tried it. And it actually felt better than o3. And GPT-5 Pro felt better than o3-pro. I still can't say exactly *how much better* they are, but the difference is noticeable and significant in many scenarios. It's hard to notice in simple casual chats, but once you give it something complex or put it in an agentic environment - you'll see how it just does a better job than o3 could, and in many cases much better than any other model could.

It also translates to agentic coding. For a whole month beforehand I had been using Claude 4 Opus for coding in Claude Code full-time, and it was great. I liked that model and its taste. It was nice coding with it. But honestly, it was pissing me off very often. So I tried the Codex CLI with GPT-5 inside. The UX of the CLI itself is poor compared to Claude Code at the moment; it is not that developer-friendly. But after coding with GPT-5 the same way I did with Opus, I started to notice how GPT-5 was just better. It's not always about the quality of the code, and definitely not about the speed. My first impression is that it not only writes better code overall, but that it's much better at instruction following and tool calling. Those are the things people liked most about Claude models, and I liked them too, and many people thought no model would match Claude on them anytime soon.

The thing is that GPT-5 just follows your instructions extremely precisely. And it doesn't do things you didn't ask it to do. Claude was pissing me off so much by going off track from instructions in long coding sessions, or even in simple queries, doing something I did not ask for. GPT-5 is just better in this regard. Sometimes it follows instructions so well that you realize your instructions were bad. And it works very well with long context: I can mention something once early in a coding session and watch it keep remembering, referencing, and following that for a long time. Opus was missing those things very often, especially in long sessions.

It might sound like too much ass-licking for OpenAI, but that's my honest experience with GPT-5. I was sceptical too, especially after that boring livestream and so much hate on X. But after trying it all out myself, I was really amazed. Is it "o4" level? I'm not sure. More like o3.5.
The Complexity Threshold of AI

We see dozens of new LLMs heavily tuned for software engineering tasks, and they're becoming very good at it, very quickly. As models evolved, I started using them more and more for writing code, eventually reaching a point where I almost completely stopped writing code myself. The last time I wrote code manually (or rather, with AI-assisted tab completions) was around four months ago.

However, once tasks become larger and more complex, these models quickly become inefficient. They seem to have a certain complexity threshold, beyond which their efficiency rapidly declines.

I was mostly using AI either to quickly take projects "from 0 to 1" by iterating on MVPs, or to build small Python scripts for working with LLMs and data. About a week ago, I needed to rapidly build another MVP while iterating on ideas, so I used Claude Code and completed the whole thing within a single day. I wanted to keep developing it, but the code became so messy that changing or adding anything was nearly impossible. Even minor adjustments, like updating the UI, caused other parts to break. At that point, I decided I was done with this MVP and needed to re-implement everything from scratch with better structure and architecture.

When I started the second implementation attempt, with my "plan" ready, I gave it to Claude Code and watched it fail for hours. It was producing code, but it didn't match my vision and wasn't working as expected. Many architectural and code-level issues remained. I tried slightly adjusting the plan and re-implementing everything multiple times over the span of three days, but it didn't help. After three unsuccessful attempts, I almost lost hope.

But then I decided to spend more time refining the specification itself before starting the implementation. I spent two days writing and iteratively refining the specification through feedback loops with the smartest models, while also giving my own feedback on each part. It covered almost everything needed for implementation, from high-level architecture to logic for specific use cases and solutions for the hardest implementation parts that I struggled with the most.

Suddenly, when I gave this new specification to the agent and started slowly implementing things one by one, it just worked. I told the agent to implement the next thing, waited 10 minutes, tested the implementation, asked it to fix any issues, and then moved to the next step. A few times during those two days, I also asked another agent to carefully read the entire codebase and strictly compare it against the specification, then passed its feedback back to the first agent to resolve any differences. After about two days, the project was mostly finished. It still has some rough edges (which are easy to address), and I haven't thoroughly *tested* everything yet (I even decided not to write any automated tests at all at this stage), but all the core functionality just worked. When I asked Claude to change something, it usually did so accurately, without breaking other parts.

The thing is that AI is much better at following instructions than at coming up with practical stuff on its own. Smarter models can handle more complex tasks independently, but in larger projects, their limit of "acceptable complexity per step" is lower. Therefore, when working on a bigger and more complex project, it's important to keep all individual steps at the same level of complexity and depth as you would when working on smaller projects.
The complexity of each individual step matters more than the complexity of the whole project.
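As an illustration of keeping per-step complexity bounded, here is a rough sketch of the loop described above, reduced to bare API calls. It is only schematic: the real workflow ran inside a coding agent with access to the actual codebase, and the model name, file name, prompts, and spec items below are all made up.

```python
# Rough sketch of spec-driven, step-by-step implementation: the specification
# stays in context, each request asks for only one step, and a final
# cross-check compares what was produced against the spec.
# Model name, SPEC.md, and the step list are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-5.1-codex-max"  # placeholder coding model

specification = open("SPEC.md").read()  # the refined specification (assumed file)
steps = [                               # hypothetical implementation steps
    "Set up the project skeleton and configuration.",
    "Implement the data model from section 2 of the spec.",
    "Implement the API endpoints from section 3 of the spec.",
]

# A single running conversation so the model keeps the spec and its own
# previous output in context, loosely mimicking an agent's session memory.
messages = [{"role": "system", "content": f"You are implementing this specification:\n{specification}"}]


def send(prompt: str) -> str:
    messages.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    answer = resp.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer


for i, step in enumerate(steps, start=1):
    # Keep each request at roughly the same, small complexity.
    print(send(f"Implement only step {i}: {step} Do not touch anything else."))

# Cross-check pass: strictly compare everything produced so far against the spec.
print(send("Compare everything implemented so far against the specification and list every mismatch."))
```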
✍️ New Blog Post: "Billions of Tokens Later: Scaling LLM Fuzzing in Practice" I've spent months running LLM-powered fuzzing at production scale - processing billions of tokens, discovering practical scaling laws, and developing effective deduplication strategies. Here's what I learned along the way: https://gusarich.com/blog/billions-of-tokens-later/
❤️ After more than a year of part-time work, I have finally joined the @ton_studio Tact compiler team full-time!

It has been a great experience. Initially, I was mostly involved with language features and the compiler itself. During our team's early months, this was necessary due to our smaller size. However, as our team expanded rapidly with new talented engineers, I was recently able to shift my focus to tasks that are now more interesting to me.

Currently, I am focused on LLM-powered fuzzing for ensuring security and documentation quality. We have achieved incredible results with this approach, and a new blog post will soon be published, sharing insights into the efficiency of different models and the fuzzing methodology overall. I also plan to leverage my expertise in DeFi and smart contracts, gained over several years of successfully implementing and auditing large-scale solutions, to support the team with our DeFi libraries and best-practice implementations of standard smart contracts.

Since I now have much greater freedom in my activities and our team has significantly more ongoing projects, I feel better than ever about working full-time and striving to make TON the best blockchain for developers and Tact the best language for building on it.
Multitasking in 2025

People tend to multitask more and more as technology and society evolve. And this behavior only becomes stronger as AI integrates into our daily lives. We now consume multiple sources of information and do multiple things at the same time. But for some, that can be very hard - our brains work differently.

The key shift is that now you can actually delegate part of your cognitive load to AI, instantly and effectively, freeing up mental space for more things. If you use AI for two tasks, you can easily handle both at once. Send a message in one window, and while waiting for the result, switch to something else, send a message there too, then switch back and see the first result. Repeat. You're basically doubling your speed. What were you doing before while waiting for a reply anyway? Scrolling social media? What if you did something else instead?

I've never been good at multitasking. Even with simple things - like talking while doing something physical - I often just stop thinking about one task until I finish the other. I could be putting milk in the fridge while talking to someone and just... stop, fridge wide open, until I finish the sentence, and only then finally put the milk in. But even with that, I've still managed to multitask effectively over the past few weeks thanks to AI. Most of the time when I work now, I handle two things at once - whether it's job stuff, studies, writing, or some boring online things like booking hotels and planning trips. I often have multiple ChatGPT windows open at the same time, doing different things. And I like it - I like how I can literally get so much more done in the same amount of time.

Of course, not all my work is multitasked. Sometimes I enter long stretches of deep focus on just one task - and even then, AI still helps a lot. It boosts your efficiency even when you're doing only one thing at a time.

People who are already good at multitasking - and constantly generating ideas in their head - will experience this the most. What if, instead of just writing a fresh idea into your notes, you could instantly open a new tab and start implementing it, without even losing focus on other things? It's incredible. And it'll only get better as AI systems evolve. Remember: we're still early. Many jobs will eventually transform into manager-like work - but instead of managing people, you'll be managing multiple AI agents at once.

Even with today's AI, you can do so much more, in both quality and quantity. One of the best skills to develop right now is the ability to think and read fast - it directly boosts your efficiency. You don't need to master specific hard skills. Instead, learn how to learn. Learn how to adapt.
✍️ New Blog Post: Documentation-Driven Compiler Fuzzing with Large Language Models I ran a relatively simple black-box fuzzing experiment with a fresh approach on the Tact compiler, using only documentation as input. Found 10 real issues for just $80. https://gusarich.com/blog/post.html?post=fuzzing-with-llms
✍️ My First Blog Post: Measuring and Analyzing Entropy in Large Language Models I benchmarked 52 models across 12 different prompts, with 500 generations per combination, resulting in many interesting charts. https://gusarich.com/blog/post.html?post=measuring-llm-entropy
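For a rough idea of the quantity being measured, here is a tiny sketch of a plug-in entropy estimate over repeated generations of a single prompt. It is a simplification for illustration, not the exact methodology from the post, and the sample data is made up.

```python
# Plug-in (empirical) Shannon entropy over repeated completions of one prompt.
# With a real model you would collect e.g. 500 generations per prompt/model
# pair; the toy sample below stands in for them.
import math
from collections import Counter


def empirical_entropy(samples: list[str]) -> float:
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


completions = ["blue"] * 350 + ["azure"] * 100 + ["cyan"] * 50  # made-up data
print(f"{empirical_entropy(completions):.2f} bits")  # ~1.16 bits
```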
Here are charts illustrating the statistics mentioned above with specific numbers.
3. Technological Advantages and Clear Path Forward

Currently, Tact compiles to FunC (the language from which Tolk was forked), rather than directly to assembly. Despite this intermediate compilation step, we've already made significant strides in optimizing gas efficiency. Specifically, for common use cases and typical developer-written contracts (without extreme, manual, low-level optimizations), contracts written in Tact now consume slightly less gas than the same logic written directly in FunC.

We achieved this through sophisticated, built-in low-level optimizations embedded in our compiler and standard library. Essentially, Tact automatically applies many optimizations that would otherwise require specialized, manual effort, allowing developers to focus purely on the logic, readability, and architecture of their smart contracts rather than the complexities of gas optimization.

Our future plans are even more ambitious: once we eliminate the dependency on FunC and transition to direct compilation to assembly, we'll unlock even deeper and more powerful optimization possibilities. This will further widen the performance and efficiency gap between Tact and alternatives like Tolk.

When it comes to features, Tact is already well ahead of Tolk. From day one, Tact was purposefully designed to provide developers with a smooth and intuitive experience, rich tooling, and the ability to effortlessly create maintainable, secure, and scalable smart contracts. Our upcoming release, Tact 2.0, scheduled for later this year, will further strengthen this foundation, introducing even more innovative features, optimizations, and architectural improvements. While Tolk is also moving toward better usability and feature enhancements, Tact's design principles and dedicated roadmap position it uniquely to remain the leading choice for TON smart contract development.