Comparison of V and other languages

V was created because none of the existing languages had all of the following features:

| Feature | Languages |
|---|---|
| Fast compilation | D, Go, Delphi |
| Simplicity & maintainability | Go |
| Great performance on par with C, and zero cost C interop | C, C++, D, Delphi, Rust |
| Safety (immutability, no null, option types, free from data races) | Rust |
| Easy concurrency | Go |
| Easy cross compilation | Go |
| Compile time code generation | D |
| Small compiler with zero dependencies | - |
| No global state | - |
| Hot code reloading | C# (.NET 6+), Dart |

Initially I was going to compare V to all major languages, but it got repetitive pretty quickly.

The table above and the list of the features on the home page should give you a pretty good picture.

For example, it's pretty obvious that compared to C++, V is much simpler. It offers significantly faster compilation, safety, lack of undefined behavior (wip: e.g. integer overflow can still result in UB), easy concurrency, compile time code generation, etc.

Compared to Python, it's much faster, simpler, safer, more maintainable, etc.

You can use this formula for any language.

Syntax comparison:

V for Go developers

V for C++ developers

Since V is very similar to Go, and its domain is close to Rust's, I've kept the comparisons with these two languages.


V is very similar to Go, and these are the things it improves upon:

— No err != nil checks (replaced by result types)

— No undefined values

— No variable shadowing

— Immutability by default

— Enums

— Sum types (type Expr = IfExpr | StringLiteral | IntLiteral | ...)

— String interpolation: println('${foo}: ${bar.baz}')

— If and match expressions (including sum type matches)

— No global state (globals can be enabled for low level applications like kernels via a command line flag)

— A simple way to check whether an array contains an element: if elem in arr { ... }

— Only one declaration style: a := 0

— Warnings for unused imports and variables, for quicker development without annoying interruptions; they are emitted only in development/debugging mode.
Making a production build still requires fixing all of them, thus enforcing clean code.

— filter/map/reduce methods for arrays and maps.

— Much smaller runtime

— Much smaller binaries (a simple web server written in V is ~600 KB vs ~7 MB in Go)

— Zero cost C interop

— GC is optional

— Much faster serialization using codegen and no runtime reflection

— Precompiled text and HTML templates unlike Go's html/templates that have to be parsed on every request (or pre-cached and executed on every request) and have to be deployed with the app's binary.

— Fearless concurrency: a compile time guarantee of no data races (wip)

— No null (null is only allowed in unsafe code)

— Stricter vfmt to ensure one coding style

— Centralized package manager (v install ...)

— Much simpler and less verbose testing via assert.

— Primitive types can have methods resulting in less verbose code: strings.Replace(strings.Replace(s, "a", "A", -1), "b", "B", -1) =>
s.replace('a', 'A').replace('b', 'B')

— Arrays and maps (and arrays of arrays, arrays of maps, etc.) are automatically allocated. No more nil reference panics from forgetting to allocate each map in a loop.


Rust has a very different philosophy.

It is a complex language with a growing set of features and a steep learning curve. No doubt, once you learn and understand the language, it becomes a very powerful tool for developing safe, fast, and stable software. But the complexity is still there.

V's goal is to allow building maintainable and predictable software. That's why the language is so simple and maybe even boring for some. The good thing is, you can jump into any part of the project and understand what's going on, feel like it was you who wrote it, because the language is simple and there's only one way of doing things.

Rust's compilation speed is slow, on par with C++'s. V compiles 1.2 million lines of code per CPU core per second.

V vs Rust vs Go: Example

Since V's domain is close to both Go and Rust, I decided to use a simple example to compare the three.

It's a simple program that concurrently fetches the top Hacker News stories. (Note that the examples stick to each language's standard library where possible; the Rust version also uses the serde and reqwest crates.)


Rust:

use serde::Deserialize;
use std::sync::{Arc, Mutex};

const STORIES_URL: &str = "";
const ITEM_URL_BASE: &str = "";

#[derive(Deserialize)]
struct Story {
    title: String,
}

fn main() {
    let story_ids: Arc<Vec<u64>> = Arc::new(reqwest::get(STORIES_URL).unwrap().json().unwrap());
    let cursor = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..8 {
        let cursor = cursor.clone();
        let story_ids = story_ids.clone();
        handles.push(std::thread::spawn(move || loop {
            let index = {
                let mut cursor_guard = cursor.lock().unwrap();
                let index = *cursor_guard;
                if index >= story_ids.len() {
                    return;
                }
                *cursor_guard += 1;
                index
            };
            let story_url = format!("{}/{}.json", ITEM_URL_BASE, story_ids[index]);
            let story: Story = reqwest::get(&story_url).unwrap().json().unwrap();
            println!("{}", story.title);
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
}
Go:

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
)

const STORIES_URL = ""
const ITEM_URL_BASE = ""

type Story struct {
	Title string
}

func main() {
	rsp, err := http.Get(STORIES_URL)
	if err != nil {
		panic(err)
	}
	defer rsp.Body.Close()
	data, err := ioutil.ReadAll(rsp.Body)
	if err != nil {
		panic(err)
	}
	var ids []int
	if err := json.Unmarshal(data, &ids); err != nil {
		panic(err)
	}
	var cursor int
	var mutex sync.Mutex
	next := func() int {
		mutex.Lock()
		defer mutex.Unlock()
		temp := cursor
		cursor++
		return temp
	}
	wg := sync.WaitGroup{}
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for cursor := next(); cursor < len(ids); cursor = next() {
				url := fmt.Sprintf("%s/%d.json", ITEM_URL_BASE, ids[cursor])
				rsp, err := http.Get(url)
				if err != nil {
					panic(err)
				}
				data, err := ioutil.ReadAll(rsp.Body)
				rsp.Body.Close()
				if err != nil {
					panic(err)
				}
				var story Story
				if err := json.Unmarshal(data, &story); err != nil {
					panic(err)
				}
				fmt.Println(story.Title)
			}
		}()
	}
	wg.Wait()
}
V:

import net.http
import json

const (
	stories_url   = ''
	item_base_url = ''
)

struct Story {
	title string
}

struct Cursor {
mut:
	pos int
}

fn main() {
	resp := http.get(stories_url)!
	ids := json.decode([]int, resp.body)!
	shared cursor := Cursor{}
	mut threads := []thread{}
	for _ in 0 .. 8 {
		threads << go fn (ids []int, shared cursor Cursor) {
			for {
				id := lock cursor {
					if cursor.pos >= ids.len {
						break
					}
					cursor.pos++
					ids[cursor.pos - 1]
				}
				resp := http.get('${item_base_url}/${id}.json') or { panic(err) }
				story := json.decode(Story, resp.body) or { panic(err) }
				println(story.title)
			}
		}(ids, shared cursor)
	}
	threads.wait()
}

V and Nim are very different. One of V's main philosophies is "there must be only one way of doing things". This results in predictable, simple, and maintainable code.

Nim gives a lot of options and freedom to developers. For example, in V you would write foo.bar_baz(), but in Nim all of these are valid: foo.barBaz(), foo.bar_baz(), bar_baz(foo), barBaz(foo), barbaz(foo) etc.

In V there's only one way to return a value from a function: return value. In Nim you can do return value, result = value, value (final expression), or modify a ref argument.

Features like macros and OOP offer multiple ways to solve problems and increase complexity.

Nim's strings are mutable; in my opinion, this is a huge drawback. I'll post a detailed article about the power of immutable strings.

Unlike V, Nim generates unreadable C code with lots of extra bloat. For example:

type User = object
    name: string
    last_name: string
    age: int

var users = [
    User(name: "Carl", last_name: "Black", age: 22),
    User(name: "Sam", last_name: "Johnson", age: 23)
]

If we build this with nim c -d:release test.nim, we get:
STRING_LITERAL(TM_R8RUzYq41iOx0I9bZH5Nyrw_5, "Carl", 4);
STRING_LITERAL(TM_R8RUzYq41iOx0I9bZH5Nyrw_6, "Black", 5);
STRING_LITERAL(TM_R8RUzYq41iOx0I9bZH5Nyrw_7, "Sam", 3);
STRING_LITERAL(TM_R8RUzYq41iOx0I9bZH5Nyrw_8, "Johnson", 7);
NIM_CONST tyArray_m9aGbgPB3gZgFcKcDkjg9a8g TM_R8RUzYq41iOx0I9bZH5Nyrw_4 = {
{((NimStringDesc*) &TM_R8RUzYq41iOx0I9bZH5Nyrw_5), ((NimStringDesc*) &TM_R8RUzYq41iOx0I9bZH5Nyrw_6), ((NI) 22)},
{((NimStringDesc*) &TM_R8RUzYq41iOx0I9bZH5Nyrw_7), ((NimStringDesc*) &TM_R8RUzYq41iOx0I9bZH5Nyrw_8), ((NI) 23)}};

N_LIB_PRIVATE N_NIMCALL(void, NimMainModule)(void) {
        TFrame FR_; FR_.len = 0;
        genericAssign((void*)users_oOczRkVOc3qtKT8rsAJzaw, (void*)TM_R8RUzYq41iOx0I9bZH5Nyrw_4, (&NTI_m9aGbgPB3gZgFcKcDkjg9a8g_));

N_LIB_PRIVATE N_NIMCALL(void, aDatInit000)(void) {
static TNimNode* TM_R8RUzYq41iOx0I9bZH5Nyrw_2[3];
static TNimNode TM_R8RUzYq41iOx0I9bZH5Nyrw_0[4];
NTI_Qp0mfNOzxWdmSSWLHA9cnZQ_.size = sizeof(tyObject_User_Qp0mfNOzxWdmSSWLHA9cnZQ);
NTI_Qp0mfNOzxWdmSSWLHA9cnZQ_.kind = 18;
NTI_Qp0mfNOzxWdmSSWLHA9cnZQ_.base = 0;
NTI_Qp0mfNOzxWdmSSWLHA9cnZQ_.flags = 2;
TM_R8RUzYq41iOx0I9bZH5Nyrw_2[0] = &TM_R8RUzYq41iOx0I9bZH5Nyrw_0[1];
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[1].kind = 1;
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[1].offset = offsetof(tyObject_User_Qp0mfNOzxWdmSSWLHA9cnZQ, name);
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[1].typ = (&NTI_77mFvmsOLKik79ci2hXkHEg_);
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[1].name = "name";
TM_R8RUzYq41iOx0I9bZH5Nyrw_2[1] = &TM_R8RUzYq41iOx0I9bZH5Nyrw_0[2];
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[2].kind = 1;
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[2].offset = offsetof(tyObject_User_Qp0mfNOzxWdmSSWLHA9cnZQ, last_name);
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[2].typ = (&NTI_77mFvmsOLKik79ci2hXkHEg_);
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[2].name = "last_name";
TM_R8RUzYq41iOx0I9bZH5Nyrw_2[2] = &TM_R8RUzYq41iOx0I9bZH5Nyrw_0[3];
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[3].kind = 1;
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[3].offset = offsetof(tyObject_User_Qp0mfNOzxWdmSSWLHA9cnZQ, age);
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[3].typ = (&NTI_rR5Bzr1D5krxoo1NcNyeMA_);
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[3].name = "age";
TM_R8RUzYq41iOx0I9bZH5Nyrw_0[0].len = 3; TM_R8RUzYq41iOx0I9bZH5Nyrw_0[0].kind = 2; TM_R8RUzYq41iOx0I9bZH5Nyrw_0[0].sons = &TM_R8RUzYq41iOx0I9bZH5Nyrw_2[0]
NTI_Qp0mfNOzxWdmSSWLHA9cnZQ_.node = &TM_R8RUzYq41iOx0I9bZH5Nyrw_0[0];
NTI_m9aGbgPB3gZgFcKcDkjg9a8g_.size = sizeof(tyArray_m9aGbgPB3gZgFcKcDkjg9a8g);
NTI_m9aGbgPB3gZgFcKcDkjg9a8g_.kind = 16;
NTI_m9aGbgPB3gZgFcKcDkjg9a8g_.base = (&NTI_Qp0mfNOzxWdmSSWLHA9cnZQ_);
NTI_m9aGbgPB3gZgFcKcDkjg9a8g_.flags = 2;

V can emit native code directly (V's native backend is not as complete as the C backend yet though), Nim can only emit C and JavaScript. It's also possible to embed C code in Nim, which reduces safety and portability.

Nim allows importing functions into the global namespace. This becomes a huge problem when working on large code bases. The explicit imports that V, Go, and Oberon use are much more practical: pkg.function() vs function().

V's syntax is cleaner, with fewer rules. The lack of significant whitespace improves the readability and maintainability of large code bases, and makes code generation much easier. From my experience working with a huge Python code base, moving large blocks of code around in whitespace-sensitive languages is scary.

The list can go on and on. Nim is a language with a lot of features, still developing and changing. V is not going to change much, if at all.

Again, I'm not saying it's a worse language. It's a very different language that offers a lot of options and features. Many developers prefer this approach. And that's ok.