Since it was developed at UCLA - do they have the resources to do the next phases of trials (humans, etc.)? I wonder whether, in such cases, pharma companies (who obviously have enough money) care about these molecules at all, since they probably can't patent them and heavily monetize them later. Are there examples of drugs that were developed like this successfully?
It's very much patentable and thus represents an IP asset that can be traded, contracted, exchanged, etc., similar to a "technology invention". UCLA likely owns it, or a part of it, so it can trade it freely. This is very early for a drug, so there's a huge risk it will fail in human trials, and that high risk lowers its value significantly versus a drug that has passed Phase III. There are a kazillion drugs like this, in the XYZ-123 name format, most of which failed or never reached Phase I, Phase II, Phase III, Phase IV, or phase kappa trials.
Labs at universities have wildly different finances. Each lab basically has to compete for funds independently of its parent university - few are funded by the school itself.
It’s very unlikely that a lab equipped to study mice is also equipped for human studies. Those require a lot more money, different expertise, and different equipment and facilities.
Good to see this project on the front page! We are using Vega (specifically Vega-Lite [1]) as an engine and templates spec for data-science plots / visualizations in DVC (e.g. this is what it looks like in the VSCode extension [2]).
It allowed us to have:
- the same engine in the CLI (it can generate HTML and open it in a browser), VSCode extension, and SaaS
- a way to describe a plot's visualization / representation as a declarative spec that can then be used in all of those products (plot spec). We explored Plotly, and AFAIU there was no easy way to do the same
- a comprehensive feature set and a responsive community - the project is well maintained
To name a few downsides from our experience:
- The DSL is quite complicated. It takes some time to master, which hurts adoption. In our case I don't see that many users building custom plots / templates - the majority use the pre-baked built-in stuff, or use Python and export to SVG.
- In our case some features were missing (and are still missing), e.g. exponential moving average, which is the most common way to smooth ML training curves.
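For context, that missing smoothing can be pre-computed before handing the data to Vega-Lite. Here is a minimal sketch in plain Python (TensorBoard-style smoothing; the function name and the alpha default are my own, not part of any library):

```python
# Exponential moving average (EMA) smoothing for noisy ML training
# curves. Each smoothed point is alpha * previous smoothed value
# plus (1 - alpha) * current raw value.

def ema(values, alpha=0.9):
    """Return an EMA-smoothed copy of `values` (higher alpha = smoother)."""
    smoothed = []
    prev = values[0]  # seed with the first raw value
    for v in values:
        prev = alpha * prev + (1 - alpha) * v
        smoothed.append(prev)
    return smoothed

noisy_loss = [1.0, 0.8, 0.9, 0.5, 0.6, 0.3]
print(ema(noisy_loss, alpha=0.5))
```

The smoothed series can then be plotted as a second layer next to the raw curve in the Vega-Lite spec.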
Hey, congrats! Are you competing / is there some overlap / what are the key differences with Roe AI (YC W24) - roe.ai (also just launched on HN recently: https://news.ycombinator.com/item?id=41202694)?
Jason and Richard from Roe AI are amazing people! We were in the same YC batch and section. Excited for what Roe AI is building and their focus on building a new type of data warehouse.
At Trellis, we're focused on building the AI tool that supports document-heavy workflows (this includes a dashboard for teams to review, update, and approve results that were flagged; reading and writing directly to your system of record, like Salesforce; and allowing customers to create their own validations around the documents).
Hi HN! We are the DVC team. We are releasing a new tool to make it simpler for ML teams to curate their unstructured data and improve the quality of datasets.
ML teams have a lot of files - texts, images, videos, PDFs, etc. Those objects have information (metadata) attached to them (e.g. labels, embeddings, captions). Over the last 5-6 years of building DVC, we’ve observed a need for data versioning at scale, but also a very strong need to store and enrich this metadata, and to slice and dice those files based on it (to create datasets). We’ve seen teams building and rebuilding the same infra, glue ETLs, and scripts again and again, and decided it’s better to solve this in a more systematic way using our knowledge and experience.
DataChain is a Python library. Think of DataChain as a “data frame” with a “chain” of operations that can be applied to it - filter, merge, map, etc. We don’t store (or require moving or converting) files; rather, DataChain stores references to the originals (paths + version IDs). It uses a database underneath (SQLite) to persist results (datasets) and do out-of-memory computation. It supports parallel computation, data caching, and many other things that make it better suited for unstructured data, ML, and larger scale.
Saved datasets (data chains) can be passed to a data loader (e.g. PyTorch) to access the original raw files + metadata from the DB.
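To make the “chain of operations over file references” idea concrete, here is a toy sketch of the pattern in plain Python - records hold paths plus metadata, and filter/map steps accumulate lazily until the chain is executed. The class and method names are illustrative only, NOT the actual DataChain API:

```python
# Toy sketch of a lazy "chain" over file references + metadata.
# Nothing is copied or moved: records only point at the originals.

class Chain:
    def __init__(self, records, steps=None):
        self.records = records          # list of dicts: path + metadata
        self.steps = steps or []        # deferred (kind, fn) operations

    def filter(self, pred):
        # Return a new chain with one more deferred step; nothing runs yet.
        return Chain(self.records, self.steps + [("filter", pred)])

    def map(self, fn):
        return Chain(self.records, self.steps + [("map", fn)])

    def collect(self):
        # Execute all accumulated steps in order.
        out = self.records
        for kind, fn in self.steps:
            if kind == "filter":
                out = [r for r in out if fn(r)]
            else:
                out = [fn(r) for r in out]
        return out

refs = [
    {"path": "s3://bucket/a.jpg", "label": "cat", "score": 0.9},
    {"path": "s3://bucket/b.jpg", "label": "dog", "score": 0.4},
]
cats = Chain(refs).filter(lambda r: r["label"] == "cat").collect()
```

The real library adds the pieces this toy omits: persisting results to SQLite, parallelism, caching, and feeding the collected records into a data loader.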
Please let us know your thoughts and questions in the comments!
Oh yeah. Check out Mission Bay in San Francisco. Everything is walkable and very relevant to my life. I never drive to the grocery store, day care, coffee (3-4 places with excellent espresso), or kid activities (there are a bunch of places - music, a soccer field, etc.). In fact, for the first 4-5 years, until we had 2 kids, we didn't even own a car - there was no need in SF ... and we enjoyed it a lot.
Now, with my second kid going to school (public), I have to drive (it's literally 1 mile). It has changed the whole schedule. That single drive! It annoys the hell out of me. It's hard to imagine how convenient it is when you don't have to drive anywhere - even school, day care, etc. This makes a huge difference.
Yup, but that's because, well, the public transit in NYC is just a lot better than almost every other US city (and, let's face it, has a higher proportion of "normal people" riding it).
Part of the problem in the US is that parents, and oftentimes the wider public, give children much less credit than they are due.
Children are learning, but they are not dumb. They can generally recognize dangerous situations and exercise good judgment, and shocking crimes like stranger abduction are incredibly rare. They can, if services exist, go to the playground and school unassisted, but unfortunately we now live in a society where, even if you trust your kids, someone else might call child services on you.
My partner does this once a week. We have an electric kids' scooter. She has to drive with the smaller scooter attached to her own, since there is no place at school to leave a bike or a scooter. So, it works fine, but it's still quite tedious. Schools apparently are not designed for this - probably they don't want to take care of the scooters, be responsible for them, etc.
It can be done, but it takes substantially more time with a kid, plus the walk back. My partner drives my kiddo there on an electric scooter, which works fine as well.
To clarify the first part for people who haven't visited yet: about half the neighborhood is some kind of UCSF special district, and street parking there is reserved - even at night, everywhere I looked. The rest is parking meters and wide streets where parking is needlessly forbidden.
These parking meters go wild with what electronic meters allow. The hourly cost changes depending on the time of day and on events (there are 3 very large event venues that this parking area could serve - or that could abuse it), ranging from perhaps 50 cents (truly cheap for a meter, though with no free hours, if I remember right) to over 10 dollars (eye-watering). And there are no signs showing the current cost or the cost schedule - you have to let each meter tell you. And most of these spaces are still full - there is certainly money to be extracted there.
Street parking in Mission Bay is a traumatizing experience :-)
Iterative.ai (Series A) | REMOTE, WORLDWIDE | FULL-TIME | OPEN-SOURCE
Developer tools for ML engineers. We need people who are passionate about building infrastructure that helps ML teams manage data at large scale, create sane reproducible workflows, and track models. We are building infrastructure to deal with datasets at LAION-5B scale. Our flagship open-source tool: DVC.org (12K+ stars); SaaS product: studio.iterative.ai.
We are looking for senior (SaaS and open source) engineers:
This reminds me of an approach from data science and ML. It's far more common and essential for ML teams to log their attempts (experiments). There is a whole set of tools for this - experiment trackers - primarily to be able to compare runs and pick the best direction at all, but also to ensure reproducibility (in some areas it might be required).
ML/DS has always seemed to me closer to science in its nature than to software engineering. That may be because of the nature of the work itself - in a lot of cases it's a process of incremental improvements (versus, simplifying a lot, SE, where we build a button that either works or doesn't).
> ML/DS has always seemed to me closer to science in its nature than to software engineering. That may be because of the nature of the work itself - in a lot of cases it's a process of incremental improvements
Right, I have felt this way too (even though I am just a noob at ML/DS). For me it is the use of statistics / probability / mathematics in driving understanding / intuition about the problem.
> Here is what has changed: we no longer trust developers ...
I have a different hypothesis on this (I've never checked it, though). I think that as those languages - Python, JS - got more and more popular, we started using them to build and maintain more and more complex projects. What worked for a few glue-code scripts doesn't work for a project of Dropbox's size and scale. Thus there was a need to evolve and augment the language.
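A concrete instance of that evolution: Python grew optional type hints (PEP 484), and Dropbox was in fact a major driver of the mypy checker. A tiny sketch of how the augmentation works - annotations change nothing at runtime, but a static checker can flag mismatches across a large codebase before they ship:

```python
# Optional type hints (PEP 484): pure glue-code Python would skip the
# annotations; at Dropbox scale, a checker like mypy uses them to
# catch type errors without running the code.

def total_bytes(sizes: list[int]) -> int:
    """Sum a list of file sizes in bytes."""
    return sum(sizes)

print(total_bytes([100, 250, 50]))   # prints 400

# At runtime nothing stops total_bytes(["100"]) from being called and
# crashing inside sum(); mypy would reject it statically:
# error: List item 0 has incompatible type "str"; expected "int"
```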