About me
I am an AI/Data/Software Engineer with 6+ years of professional experience, based in Austria. Currently I work as a freelancer in the AI & Data world. In addition, I own a small, early-stage startup company. I graduated in Computer Science from Graz University of Technology in Austria, where I worked as a teaching assistant for two courses: Computational Geometry and Theoretical Computer Science. I grew up in Banja Luka (B&H), where I completed the International Baccalaureate Diploma Programme (IB DP) with Mathematics and Computer Science at Higher Level.
Apart from this, I enjoy riding my Ducati way too much to leave it out of this CV.
Work Experience
Since February 2025, I have been working independently on a series of advanced freelance and open-source projects
centered around cutting-edge data infrastructure, LLM systems, and AI-driven applications. My work spans the full stack,
with a strong emphasis on building robust, cloud-native solutions that support high-performance, real-world AI use cases.
My initial focus has been the design and deployment of cloud-native data warehouses, ELT pipelines,
and event-driven architectures using Google Cloud technologies, including BigQuery, Cloud Storage,
Pub/Sub, and Cloud Run. These pipelines have been instrumental in supporting scalable ingestion and
transformation of structured and unstructured data across multiple clients, including use cases where I applied GenAI to
document analytics and compliance evaluations.
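To illustrate the shape of such an event-driven ingestion step, here is a minimal sketch, assuming a Pub/Sub push subscription delivering JSON events to a Cloud Run endpoint; the event fields and the target table layout are illustrative, not taken from any client project:

```python
import base64
import json

def decode_pubsub_push(envelope: dict) -> dict:
    """Decode a Pub/Sub push envelope into the original event payload.

    Pub/Sub push subscriptions wrap the publisher's bytes in a
    base64-encoded 'data' field inside a 'message' object.
    """
    message = envelope["message"]
    payload = json.loads(base64.b64decode(message["data"]).decode("utf-8"))
    payload["_attributes"] = message.get("attributes", {})
    return payload

def to_bigquery_row(event: dict) -> dict:
    """Flatten a decoded event into a row for a hypothetical 'events' table."""
    return {
        "event_id": event["id"],
        "event_type": event["type"],
        "source": event["_attributes"].get("source", "unknown"),
        # Keep the raw payload alongside the typed columns for replayability.
        "raw": json.dumps({k: v for k, v in event.items() if not k.startswith("_")}),
    }
```

In a real deployment the resulting rows would be streamed or batch-loaded into BigQuery; keeping the raw payload next to the typed columns makes later schema evolution cheap.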
In parallel, I’ve architected and implemented Retrieval-Augmented Generation (RAG) systems,
combining vector-based search with structured knowledge graphs to power intelligent assistants
and document-understanding agents. I’ve worked with graph databases such as Neo4j to model semantically rich
relationships and with AstraDB (DataStax) for scalable vector search (also pgvector), enabling hybrid retrieval strategies.
These RAG pipelines have included custom filtering logic, user-specific metadata indexing, and memory caching
mechanisms to improve both precision and latency, allowing me to apply GenAI's full potential to real-world problems.
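The core of such a hybrid retrieval step can be sketched in a few lines; this is a simplified, dependency-free illustration of combining a user-specific metadata filter with vector similarity ranking (the document fields and the "public" visibility convention are assumptions for the example):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query_vec, query_user, docs, top_k=3):
    """Filter by user-specific metadata first, then rank by vector similarity.

    A production system would push the metadata filter into the vector
    store (e.g. pgvector WHERE clauses) instead of filtering in Python.
    """
    candidates = [d for d in docs if d["meta"].get("user") in (query_user, "public")]
    ranked = sorted(candidates, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:top_k]
```

Filtering before ranking keeps per-user results isolated and shrinks the similarity search space, which is exactly what the metadata-indexing layer buys in the real pipelines.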
Another interest of mine is memory optimization for LLMs, especially in long-term user interaction scenarios. I’ve contributed to the open-source project mem0,
where I developed tools for contextual memory recall and personalized fact tracking, helping AI agents maintain coherent, evolving conversations over time.
This included improving memory pruning logic, adding user-specific search filters, and extending support for time-based retrieval and embedding freshness.
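The combination of time-based retrieval and pruning can be sketched as follows; this is a toy illustration of the idea, not code from mem0 itself, and the half-life scoring and thresholds are assumptions chosen for the example:

```python
def freshness(ts: float, now: float, half_life: float = 86400.0) -> float:
    """Exponential recency decay: 1.0 for a brand-new memory,
    0.5 after one half-life (here one day), and so on."""
    return 0.5 ** ((now - ts) / half_life)

def recall(memories, relevance, now, top_k=3, min_score=0.1):
    """Combine semantic relevance with time-based freshness,
    pruning memories whose combined score falls below a threshold."""
    scored = []
    for m in memories:
        score = relevance(m) * freshness(m["ts"], now)
        if score >= min_score:  # pruning: stale or weakly relevant memories drop out
            scored.append((score, m))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for _, m in scored[:top_k]]
```

Weighting relevance by recency lets an agent prefer what the user said today over what they said last month without discarding older facts outright.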
These efforts demanded deep involvement across all layers of the stack—from cloud architecture and DevOps to embedding strategies,
indexing policies, and prompt tuning for both OpenAI and Google Gemini models. These systems are designed for production-level reliability,
with low-latency retrieval, versioned memory stores, and automated document ingestion flows. This hands-on experience has equipped me
to solve real-world challenges at the intersection of data engineering, LLM application development, and knowledge-centric AI infrastructure.
In addition to my Data Engineering/Tech Lead responsibilities, I managed a small team of 8 data engineers and data scientists. On top of this, I acted as a product owner, solution architect, and senior engineer on several Data & GenAI projects (“Dynamic Pricing”, “Chat Bot”, “Competitor Crawling”, “AI Vector Search”, etc.) where my team was in charge but also collaborated at the cross-department level within the company.
As the Tech Lead of the Data & Analytics team in an e-commerce company, my job was to design and build a Data Warehouse
with a custom layered, metadata-driven architecture. The technology stack I chose for this
is based on Google Cloud tools (including BigQuery, Cloud Run, Workflows, Dataproc, Dataform, etc.), Docker, Terraform, Python, and Scala.
As a tech lead and senior engineer in a young and expanding team, apart from knowledge transfer, my role required
careful planning of future projects, acquiring new technologies, and managing external collaborations.
On top of this, I was also responsible for building and maintaining various data pipelines, APIs, and internal tools,
as well as co-leading the SEO (Search Engine Optimisation) and Session-based Recommendation System projects.
The end goal of our team was data-driven decision making throughout the company.
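The essence of a metadata-driven warehouse is that the build order of the layers is derived from declared dependencies rather than hard-coded; a minimal sketch of that idea, with an illustrative (not real) metadata catalog and layer names:

```python
# Hypothetical metadata catalog: each target table declares its layer
# and the upstream tables it is built from.
METADATA = [
    {"table": "raw.orders",      "layer": "raw",   "sources": []},
    {"table": "stage.orders",    "layer": "stage", "sources": ["raw.orders"]},
    {"table": "core.fct_orders", "layer": "core",  "sources": ["stage.orders"]},
]

def build_order(metadata: list[dict]) -> list[str]:
    """Topologically order the tables so each one is built after its sources.

    A real orchestrator (e.g. Workflows or Dataform) does this resolution;
    the point is that adding a table means adding metadata, not code.
    """
    done: set[str] = set()
    order: list[str] = []

    def visit(table: str) -> None:
        if table in done:
            return
        entry = next(m for m in metadata if m["table"] == table)
        for src in entry["sources"]:
            visit(src)  # build upstream dependencies first
        done.add(table)
        order.append(table)

    for m in metadata:
        visit(m["table"])
    return order
```

The payoff is operational: the pipeline driver stays generic, and the warehouse grows by editing the metadata catalog alone.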
Designed, built, and deployed a web application that utilizes QR and NFC technologies.
The app is based on PHP, JavaScript, jQuery, and MySQL; it enables you to order food and drinks from the menu and contact your
bartender by simply tapping the NFC tag on the table with your smartphone. It is not just a preview of the menu, but a
full ordering system with an integrated option to pay immediately.
The link might be broken, but I will gladly give you a tour.
Many of the same mechanisms and modules were later reused to build another app, used mostly at festivals to purchase drinks with prepaid cards.
Worked as a Data Engineer on the Enterprise Data Warehouse (EDWH) project at Raiffeisen Bank International. My responsibilities ranged from managing, developing, and expanding Java and Teradata applications (ETLs) to actively participating in the project's transition to the AWS ecosystem.
As part of the newly formed Big Data and Analytics team, I was responsible for designing and implementing Near-Real-Time and batch processes in the cloud using Scala with Apache Spark. In combination with many different technologies such as Spark, Kafka, HBase, Hive, Impala, Solr (Apache Hadoop ecosystem), Jenkins, etc., we developed various kinds of processes with the goal of having a central place (a Data Lake) for all transactions within Wirecard. We used the Cloudera platform to host our Data Lake and pipelines. Unfortunately, a company that I enjoyed very much became insolvent after members of the board were implicated in an international financial scandal.
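The heart of those Near-Real-Time processes was windowed aggregation over transaction streams; the Spark Structured Streaming jobs themselves were in Scala, but the underlying idea can be sketched in a few dependency-free lines (event shape and window size are illustrative):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count transactions per merchant in fixed, non-overlapping time windows,
    mimicking the micro-batch aggregation a streaming job performs.

    `events` is an iterable of (timestamp_seconds, merchant_id) pairs.
    Returns {(window_start, merchant_id): count}.
    """
    counts: dict = defaultdict(int)
    for ts, merchant in events:
        # Align the timestamp down to the start of its tumbling window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, merchant)] += 1
    return dict(counts)
```

In the real pipelines, Spark handled the hard parts this sketch ignores: late-arriving data, watermarking, and fault-tolerant state between micro-batches.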
I worked as a Software Engineer at Jungheinrich Systemlösungen GmbH, developing and further improving their Warehouse Management System (WMS). A WMS is an intralogistics software solution that manages, controls, and optimizes the warehouse. My job mainly consisted of implementing additional business logic in the WMS itself.
I was in charge of mentoring students when they had problems solving the homework. On top of that, I designed and graded the homework assignments and exams. I did this job for the two courses mentioned above: Computational Geometry and Theoretical Computer Science.
The goal of my internship was to build a new internal Task Management System (TMS) for the company. The idea was that their trusted customers would have access to the company's TMS. In addition to data representation and manipulation, the system included a Login/Register option (with multiple levels of privileges) and a chat system.