GitHub – elder-plinius/OBLITERATUS: OBLITERATE THE CHAINS THAT BIND YOU
OBLITERATUS is a groundbreaking open-source toolkit designed for analyzing and mitigating refusal behaviors in large language models. It provides a comprehensive, pipeline-driven approach – from pinpointing refusal ‘directions’ within a model’s hidden states to directly intervening at inference time, without retraining. The project also runs a collaborative research experiment, collecting anonymous benchmarking data to drive advances in abliteration techniques. A user-friendly Gradio interface on Hugging Face Spaces enables easy experimentation, while a Python API offers granular control for advanced users.
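The core idea behind directional ablation ("abliteration") can be sketched in a few lines of numpy: estimate a refusal direction as the difference of mean hidden states between prompts the model refuses and prompts it answers, then project that direction out of activations at inference time. The function names, shapes, and toy data below are illustrative assumptions, not the OBLITERATUS API.

```python
import numpy as np

def refusal_direction(h_refuse, h_comply):
    """Estimate a 'refusal direction' as the normalized difference of means.

    h_refuse, h_comply: (n_samples, d_model) hidden states collected on
    refused vs. complied-with prompts (toy stand-ins here).
    """
    d = h_refuse.mean(axis=0) - h_comply.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(h, direction):
    """Remove the component of activations h along the given unit direction."""
    return h - np.outer(h @ direction, direction)

# Toy demonstration: after ablation, the activations have no component
# left along the estimated direction.
rng = np.random.default_rng(0)
d_model = 8
h_refuse = rng.normal(size=(16, d_model)) + 2.0  # shifted cluster
h_comply = rng.normal(size=(16, d_model))
r = refusal_direction(h_refuse, h_comply)
h_clean = ablate(h_refuse, r)
print(np.allclose(h_clean @ r, 0.0))  # True
```

In a real model the same projection would be applied to the residual stream at each layer during the forward pass; the sketch only shows the linear-algebra step.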
Dictionary of Algorithms and Data Structures
This is a dictionary of algorithms, algorithmic techniques, data structures, archetypal problems, and related definitions. Algorithms include common functions, such as Ackermann’s function. Problems include traveling salesman and Byzantine generals.
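Ackermann's function, mentioned above, is the archetypal example of a total computable function that is not primitive recursive; a minimal memoized sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Ackermann's function: total and computable, but it grows so fast
    that it is not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9   (A(2, n) = 2n + 3)
print(ackermann(3, 3))  # 61  (A(3, n) = 2^(n+3) - 3)
```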
In the words of the great classic, “dort ist alles” (“everything is there”).
Home – Welcome to MLBox’s official documentation — MLBox Documentation
MLBox is a powerful Automated Machine Learning python library. It provides the following features:
Fast reading and distributed data preprocessing/cleaning/formatting.
Highly robust feature selection and leak detection.
Accurate hyper-parameter optimization in high-dimensional space.
State-of-the-art predictive models for classification and regression (Deep Learning, Stacking, LightGBM, …).
Prediction with model interpretation.
ryanjay0/miles-deep: Deep Learning Porn Video Classifier/Editor with Caffe
Using a deep convolutional neural network with residual connections, Miles Deep quickly classifies each second of a pornographic video into 6 categories based on sexual act with 95% accuracy. Then it uses that classification to automatically edit the video. It can remove all the scenes not containing sexual contact, or edit out just a specific act.
XGBoost-Node is the first port of XGBoost that runs existing XGBoost models with Node.js.
XGBoost is a library from DMLC, designed and optimized for boosted trees. Its underlying algorithm is an extension of the classic GBM (gradient boosting machine) algorithm. With multi-threading and regularization, XGBoost can utilize more computational power and produce more accurate predictions.
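The gradient-boosting idea underlying GBM and XGBoost can be illustrated with plain numpy: each round fits a weak learner (here a one-split "stump", i.e. a depth-1 regression tree) to the residuals of the current ensemble. This is a toy sketch of the principle, not the XGBoost API, which adds regularized objectives, second-order gradients, and much more.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-threshold split of 1-D x, minimizing squared error."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: fit stumps to residuals,
    which are exactly the negative gradient of the loss."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(rounds):
        stump = fit_stump(x, y - pred)
        pred += lr * stump(x)
        stumps.append(stump)
    return lambda z: y.mean() + lr * sum(s(z) for s in stumps)

# Toy regression: the ensemble learns a step function.
x = np.linspace(0, 1, 100)
y = (x > 0.5).astype(float)
model = boost(x, y)
print(np.abs(model(x) - y).mean() < 0.05)  # True
```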
Uses the full GM set of musical instruments to play MIDI and single sounds or effects. Supports reverberation and an equaliser. No plugins, no Flash. A pure HTML5 implementation compatible with desktop and mobile browsers.
mljs/random-forest: Random forest for classification and regression.
Random forest for classification and regression.
A tutorial on how to implement an algorithm for predictive maintenance using survival analysis theory and gated recurrent neural networks in Keras.
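The "gated" part of a gated recurrent network can be shown in a few lines of numpy: a GRU cell uses update and reset gates to control how much of the previous hidden state survives each step. This is an illustrative sketch with random weights, not the Keras layer (whose weight layout and gate ordering differ).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One step of a GRU cell in plain numpy.

    W, U, b stack the update (z), reset (r), and candidate (n)
    parameters along their first axis (an illustrative layout).
    """
    z = sigmoid(x @ W[0] + h @ U[0] + b[0])        # update gate
    r = sigmoid(x @ W[1] + h @ U[1] + b[1])        # reset gate
    n = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])  # candidate state
    return (1 - z) * h + z * n                     # blend old and new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
W = rng.normal(size=(3, d_in, d_h))
U = rng.normal(size=(3, d_h, d_h))
b = np.zeros((3, d_h))

h = np.zeros(d_h)
for t in range(5):  # run a length-5 random sequence through the cell
    h = gru_cell(rng.normal(size=d_in), h, W, U, b)
print(h.shape)
```

Because the new state is a convex blend of the old state and a tanh-bounded candidate, the hidden state stays in (-1, 1), which is part of what makes gated cells stable over long sequences.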
retext: Natural language processor powered by plugins based on @unifiedjs (and @vfile, @syntax-tree)
retext is an ecosystem of plug-ins for processing natural language
GitHub – commonsense/conceptnet-numberbatch
ConceptNet Numberbatch consists of state-of-the-art semantic vectors (also known as word embeddings) that can be used directly as a representation of word meanings or as a starting point for further machine learning.
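Using embeddings "directly as a representation of word meanings" usually means comparing them with cosine similarity. The vectors below are tiny made-up stand-ins; real Numberbatch embeddings are ~300-dimensional and loaded from the released files.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, the standard comparison for word embeddings."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical 4-d vectors standing in for real Numberbatch embeddings.
emb = {
    "cat": np.array([0.9, 0.1, 0.0, 0.3]),
    "dog": np.array([0.8, 0.2, 0.1, 0.3]),
    "car": np.array([0.0, 0.9, 0.8, 0.1]),
}
# Semantically close words should score higher than unrelated ones.
print(cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["car"]))  # True
```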
PaddlePaddle – PArallel Distributed Deep LEarning
PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform, originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu.
EYE is an inference engine supporting logic-based proofs. It is a backward-forward-backward chaining reasoner enhanced with Euler path detection.
The backward-forward-backward chaining is realized via an underlying Prolog backward chaining, forward meta-level reasoning, and backward proof construction.
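The Prolog-style backward chaining at the base of that stack is simple to sketch: to prove a goal, find a rule whose head matches it and recursively prove the body. The toy reasoner below works on ground atoms only; EYE itself adds unification, forward meta-level reasoning, proof construction, and Euler path detection on top.

```python
def backward_chain(goal, rules, facts, depth=0, max_depth=10):
    """Minimal backward chaining over ground atoms (illustrative sketch).

    rules: list of (head, [body atoms]); facts: set of known atoms.
    """
    if depth > max_depth:          # crude cycle/depth guard
        return False
    if goal in facts:              # goal is a known fact
        return True
    for head, body in rules:       # try every rule whose head matches
        if head == goal and all(
            backward_chain(g, rules, facts, depth + 1, max_depth) for g in body
        ):
            return True
    return False

rules = [
    ("mortal(socrates)", ["human(socrates)"]),
    ("human(socrates)", ["greek(socrates)"]),
]
facts = {"greek(socrates)"}
print(backward_chain("mortal(socrates)", rules, facts))  # True
```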
Algorithms are a fascinating use case for visualization. To visualize an algorithm, we don’t merely fit data to a chart; there is no primary dataset. Instead there are logical rules that describe behavior. This may be why algorithm visualizations are so unusual, as designers experiment with novel forms to better communicate. This is reason enough to study them.
easystar.js is an asynchronous A* pathfinding API written in JavaScript for use in your HTML5 games. The aim of the project is to make it easy and fast to implement performance-conscious pathfinding in your project.
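easystar.js itself is JavaScript and asynchronous; as a language-neutral illustration of the algorithm it implements, here is a minimal synchronous A* on a 4-connected grid in Python. The grid encoding and function name are assumptions for the sketch, not the easystar.js API.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a grid of 0 (walkable) / 1 (blocked) cells,
    with the Manhattan distance as an admissible heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    seen = set()
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # 6 moves around the wall
```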
Levenshtein Distance and the Triangle Inequality « Inviting Epiphany
The first and most important thing about Levenshtein distance is that it is actually a metric: it obeys the triangle inequality. For most other string distance measures, this property doesn’t hold.
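A quick way to see the metric property in action: compute Levenshtein distance with the classic dynamic program and spot-check the triangle inequality d(x, z) ≤ d(x, y) + d(y, z) over a few words (a sanity check, not a proof).

```python
import itertools

def levenshtein(a, b):
    """Classic row-by-row dynamic-programming edit distance
    (unit-cost insertions, deletions, and substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution / match
        prev = cur
    return prev[-1]

words = ["kitten", "sitting", "mitten", "fitting"]
ok = all(
    levenshtein(x, z) <= levenshtein(x, y) + levenshtein(y, z)
    for x, y, z in itertools.product(words, repeat=3)
)
print(levenshtein("kitten", "sitting"), ok)  # 3 True
```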
To honor Alan Turing, we built a simple LEGO Turing Machine, to show everyone how simple a computer actually is. Primary goals were to make every operation as visible as possible and to make it using the automatic components of just a single LEGO MINDSTORMS NXT set, to make it easy to reproduce for those interested.
Restoration of defocused and blurred images. Yuzhikov.com
Why are there almost no means for correcting blurring and defocusing (except the unsharp mask) – is it perhaps impossible to do at all? In fact, it is possible: development of the relevant mathematical theory started approximately 70 years ago, but, like other image processing algorithms, deblurring algorithms have only recently come into wide use.
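One of the classical results of that theory is Wiener deconvolution: if the blur kernel is known, invert it in the frequency domain, with a regularization term to keep near-zero frequencies of the kernel from amplifying noise. A 1-D toy sketch of the idea (the article's actual methods and parameters may differ):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """1-D Wiener-style deconvolution with a known blur kernel.

    k regularizes frequencies where the kernel response is near zero,
    trading a little residual blur for stability."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)                    # kernel response (zero-padded)
    G = np.fft.fft(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)    # regularized inverse filter
    return np.real(np.fft.ifft(F))

# Blur a box signal with a 5-tap moving average, then restore it.
signal = np.zeros(64)
signal[20:30] = 1.0
kernel = np.ones(5) / 5
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, 64)))
restored = wiener_deconvolve(blurred, kernel)
print(np.abs(restored - signal).max())  # much smaller than the blur error
```

The same construction extends to 2-D images with `np.fft.fft2`, which is essentially what practical deblurring tools do when the point-spread function is known or estimated.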
AgreementMaker | Efficient Matching for Large Real-World Schemas and Ontologies
AgreementMaker aims to be a user-friendly, powerful, and flexible ontology and schema matching system.
Laurence Tratt: Parsing: The Solved Problem That Isn’t
After the creation of programming languages themselves, parsing was one of the first major areas tackled by theoretical computer science and, in many people’s eyes, one of its greatest successes. The 1960s saw a concerted effort to uncover good theories and algorithms for parsing. Early parsing research shot off in many directions before largely converging. Context-Free Grammars (CFGs) eventually “won” because they are fairly expressive and easy to reason about, both for practitioners and theorists.