A currency type in a database is a data type used to store monetary values, typically with a fixed number of decimal places to represent cents or smaller units; some systems also apply display formatting such as currency symbols, although formatting is usually handled at the presentation layer.
The concurrency problems a database faces include the following (see the example after this list):
1. **Lost Updates**: When two transactions read the same data and then update it, one update may overwrite the other.
2. **Dirty Reads**: A transaction reads data that has been modified by another transaction that has not yet been committed.
3. **Non-Repeatable Reads**: A transaction reads the same row twice and gets different values because another transaction modified it in between.
4. **Phantom Reads**: A transaction reads a set of rows that match a condition, but another transaction inserts or deletes rows that affect the result set before the first transaction completes.
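For illustration, these anomalies are usually controlled by choosing a transaction isolation level. Below is a minimal sketch in SQL Server-style syntax, assuming a hypothetical `accounts(id, balance)` table:

```sql
-- Hypothetical accounts(id, balance) table.
-- SERIALIZABLE is the strictest level: it prevents dirty reads,
-- non-repeatable reads, and phantom reads at the cost of more blocking.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;

-- Read and update inside the same transaction so a concurrent
-- transaction cannot overwrite this change (lost update).
SELECT balance FROM accounts WHERE id = 42;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;

COMMIT;
```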
A heap in a database is implemented as an unordered collection of records stored in a data file. When new records are added, they are placed at the end of the file, and there is no specific order for retrieval. This allows for efficient insertions, but queries may require a full table scan unless separate (non-clustered) indexes are created, since the heap itself imposes no order.
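As a concrete example, in SQL Server a table created without a primary key or clustered index is stored as a heap. A sketch with a hypothetical `audit_log` table:

```sql
-- Hypothetical audit_log table: no primary key and no clustered index,
-- so the rows are stored as an unordered heap and new rows are appended.
CREATE TABLE audit_log (
    event_id   INT IDENTITY(1,1),
    event_text VARCHAR(200),
    logged_at  DATETIME DEFAULT GETDATE()
);

-- With no index on event_text, this query scans the entire heap.
SELECT event_id, event_text
FROM audit_log
WHERE event_text LIKE '%login%';
```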
The different types of joins used in databases are:
1. **INNER JOIN**: Returns records with matching values in both tables.
2. **LEFT JOIN (or LEFT OUTER JOIN)**: Returns all records from the left table and matched records from the right table; if no match, NULLs are returned for the right table.
3. **RIGHT JOIN (or RIGHT OUTER JOIN)**: Returns all records from the right table and matched records from the left table; if no match, NULLs are returned for the left table.
4. **FULL JOIN (or FULL OUTER JOIN)**: Returns all records when there is a match in either left or right table; unmatched records will have NULLs in the columns of the table that does not have a match.
5. **CROSS JOIN**: Returns the Cartesian product of both tables, combining all rows from the first table with all rows from the second table.
6. **SELF JOIN**: A join in which a table is joined with itself, usually via table aliases, so that rows in the same table can be compared (see the sketch after this list).
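A minimal sketch of a few of these joins, assuming hypothetical `customers(customer_id, name)` and `orders(order_id, customer_id, total)` tables:

```sql
-- Hypothetical tables: customers(customer_id, name) and orders(order_id, customer_id, total).

-- INNER JOIN: only customers that have at least one order.
SELECT c.name, o.total
FROM customers c
INNER JOIN orders o ON o.customer_id = c.customer_id;

-- LEFT JOIN: every customer; order columns are NULL when there is no matching order.
SELECT c.name, o.total
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.customer_id;

-- SELF JOIN: pairs of different customers that share the same name,
-- using the aliases a and b for the same table.
SELECT a.customer_id, b.customer_id
FROM customers a
INNER JOIN customers b
    ON a.name = b.name
   AND a.customer_id < b.customer_id;
```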
GROUPING SETS allow you to define multiple groupings in a single query, enabling you to generate different aggregate results without writing multiple queries.
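For example, a single query can aggregate by region, by product, and overall. A sketch assuming a hypothetical `sales(region, product, amount)` table:

```sql
-- Hypothetical sales(region, product, amount) table.
-- One query returns totals per region, per product, and a grand total.
SELECT region, product, SUM(amount) AS total_amount
FROM sales
GROUP BY GROUPING SETS (
    (region),
    (product),
    ()           -- the empty grouping set produces the grand total
);
```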
The advantage of `VARCHAR2` over `CHAR` is that `VARCHAR2` only uses as much storage as needed for the actual string length, while `CHAR` always uses a fixed amount of space, which can lead to wasted storage if the string is shorter than the defined length.
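For example, in Oracle (a sketch with a hypothetical `customers` table):

```sql
-- Oracle example with a hypothetical customers table.
CREATE TABLE customers (
    country_code CHAR(2),        -- always occupies 2 characters, padded with spaces
    full_name    VARCHAR2(100)   -- occupies only the actual string length, up to 100
);

-- 'Asha Rao' is stored using 8 characters, not 100.
INSERT INTO customers (country_code, full_name) VALUES ('IN', 'Asha Rao');
```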
The MERGE statement is used in a database to perform an "upsert" operation, which means it can insert new records or update existing records in a single operation based on a specified condition.
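A minimal sketch in Oracle-style syntax, assuming hypothetical `products` and `staging_products` tables:

```sql
-- Hypothetical tables: products(product_id, price) and staging_products(product_id, price).
-- Existing products get their price updated; new products are inserted.
MERGE INTO products p
USING staging_products s
    ON (p.product_id = s.product_id)
WHEN MATCHED THEN
    UPDATE SET p.price = s.price
WHEN NOT MATCHED THEN
    INSERT (product_id, price)
    VALUES (s.product_id, s.price);
```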
There are five main normal forms in normalization:
1. First Normal Form (1NF)
2. Second Normal Form (2NF)
3. Third Normal Form (3NF)
4. Boyce-Codd Normal Form (BCNF)
5. Fourth Normal Form (4NF)
Normalization is used to reduce data redundancy and improve data integrity in a database.
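As a small illustration of what normalization looks like in practice, a flat table in which customer details repeat on every order row can be split so that customer data is stored once and referenced by orders (sketch with hypothetical tables):

```sql
-- Before: a flat orders_flat(order_id, customer_name, customer_phone, product, qty)
-- table repeats customer details on every order row.

-- After: customer data is stored once and referenced by orders.
CREATE TABLE customers (
    customer_id    INT PRIMARY KEY,
    customer_name  VARCHAR(100),
    customer_phone VARCHAR(20)
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT REFERENCES customers (customer_id),
    product     VARCHAR(100),
    qty         INT
);
```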
There are several types of data models, including:
1. **Hierarchical Data Model**: Organizes data in a tree-like structure. Example: An organizational chart.
2. **Network Data Model**: Allows multiple relationships between entities. Example: A transportation network where cities are nodes and routes are connections.
3. **Relational Data Model**: Uses tables to represent data and relationships. Example: A customer database with tables for customers, orders, and products.
4. **Object-oriented Data Model**: Represents data as objects, similar to object-oriented programming. Example: A multimedia database where images and videos are treated as objects.
5. **Entity-Relationship Model (ER Model)**: Uses entities and relationships to represent data. Example: A university database with entities for students, courses, and enrollments.
6. **Document Data Model**: Stores data in document formats, often used in NoSQL databases. Example: JSON documents in a MongoDB database.
7. **Key-Value Data Model**: Stores data as simple key-value pairs, commonly used in NoSQL systems. Example: Session data stored in a Redis key-value store.
A clustered index determines the physical order of data in a table and there can be only one per table. A non-clustered index is a separate structure that points to the data and can be created multiple times on a table. A unique index ensures that all values in the indexed column are different, and it can be either clustered or non-clustered.
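For example, in SQL Server syntax, assuming a hypothetical `employees` table:

```sql
-- Hypothetical employees table.
-- The clustered index defines the physical order of the rows; only one is allowed.
CREATE CLUSTERED INDEX ix_employees_id
    ON employees (employee_id);

-- Non-clustered indexes are separate structures that point back to the rows;
-- a table can have many of them.
CREATE NONCLUSTERED INDEX ix_employees_last_name
    ON employees (last_name);

-- A unique index additionally guarantees that no two rows share the same email.
CREATE UNIQUE NONCLUSTERED INDEX ux_employees_email
    ON employees (email);
```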
A default constraint (sometimes referred to as a default key) is used to provide a default value for a column in a database table when no value is specified during an insert operation.
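A minimal sketch, assuming a hypothetical `support_tickets` table:

```sql
-- Hypothetical support_tickets table: status defaults to 'OPEN' when not supplied.
CREATE TABLE support_tickets (
    ticket_id INT PRIMARY KEY,
    status    VARCHAR(20) DEFAULT 'OPEN'
);

-- No status is given, so the stored row has status = 'OPEN'.
INSERT INTO support_tickets (ticket_id) VALUES (1);
```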
Creating a non-clustered index on a table that already has a clustered index will result in the non-clustered index being stored separately from the clustered index. The non-clustered index will contain pointers to the rows in the clustered index, allowing for efficient data retrieval without affecting the structure of the clustered index.
The types of indexes present inside a database include:
1. **Primary Index**
2. **Unique Index**
3. **Composite Index**
4. **Clustered Index**
5. **Non-Clustered Index**
6. **Full-Text Index**
7. **Bitmap Index**
8. **Spatial Index**
You should create an index on a table when you need to improve the performance of queries that frequently search, filter, or sort data in that table.
A super key is a set of one or more attributes that can uniquely identify a record in a database table.
The maximum number of indexes that can be created on a table depends on the database system being used. For example, in MySQL, you can create up to 64 indexes per table, while in SQL Server, the limit is 999 non-clustered indexes. Always refer to the specific database documentation for exact limits.
There are generally two main types of locks in a database: **shared locks** and **exclusive locks**.
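For example, reads are typically served under shared locks (or MVCC snapshots), while writes take exclusive locks; a row can also be locked explicitly. A sketch in PostgreSQL/MySQL-style syntax, assuming a hypothetical `accounts` table:

```sql
-- Hypothetical accounts(id, balance) table.
START TRANSACTION;

-- FOR UPDATE takes an exclusive row-level lock on the matched row,
-- blocking other writers until this transaction commits or rolls back.
SELECT balance
FROM accounts
WHERE id = 42
FOR UPDATE;

UPDATE accounts
SET balance = balance - 100
WHERE id = 42;

COMMIT;
```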
Yes, setting the Direct option to Yes during an insert operation in the Netezza Connector can lead to issues, such as bypassing certain data validation checks and potentially causing data integrity problems.
A Clipper/DBF error typically indicates a problem with accessing or reading a DBF file, which may be due to file corruption, incorrect file path, or incompatible file format. To resolve it, check the file integrity, ensure the correct path is used, and verify compatibility with the Clipper application.
I'm not able to open a Clipper application (the app is 15 years old) because I'm getting:
error dbfntx/1012 corruption detected
Is there any way to fix the corrupted DBF besides restoring from the last backup?
Please advise.
To reject duplicates in a source sequential file, you can use a filter option in your data processing tool or ETL (Extract, Transform, Load) software. Typically, this filter option is found in the transformation or data cleansing section of the tool, where you can specify conditions to identify and exclude duplicate records based on key fields. If using SQL, you can also use the `DISTINCT` keyword or a `GROUP BY` clause to eliminate duplicates.
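On the SQL side, a minimal sketch assuming a hypothetical `staging_orders` table loaded from the sequential file:

```sql
-- Hypothetical staging_orders table loaded from the sequential file.

-- Keep only distinct rows.
SELECT DISTINCT customer_id, order_date, amount
FROM staging_orders;

-- Or keep one row per key using GROUP BY with an aggregate.
SELECT customer_id, order_date, MIN(amount) AS amount
FROM staging_orders
GROUP BY customer_id, order_date;
```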
Databases are the backbone of almost every modern application, enabling structured storage, retrieval, and management of data. Whether it’s a simple website or an enterprise-grade system, databases play a critical role in ensuring data integrity, availability, and scalability. Understanding database fundamentals is crucial for a variety of IT roles, including backend developers, data analysts, system architects, and DBAs (Database Administrators).
This category focuses on key database concepts like relational and non-relational databases, data models, normalization, SQL queries, indexing, transactions, and ACID properties. It also covers real-world use of major database systems like MySQL, Oracle, SQL Server, PostgreSQL, and NoSQL solutions like MongoDB and Cassandra.
In interviews, candidates are often tested on query optimization, joins, subqueries, stored procedures, data security, and performance tuning techniques. With the rise of data-driven decision-making, even non-developers are expected to understand basic SQL and database reporting tools.
At Takluu, we provide curated resources, commonly asked interview questions, real-world scenarios, and sample queries to help you prepare better. We simplify complex topics with clear explanations and guide you on how to handle database-related problem-solving questions with confidence.
Whether you’re aiming to work in software development, data science, business intelligence, or IT infrastructure, having a strong foundation in databases gives you a significant edge. Our content is tailored to align with current industry expectations and real interview patterns.
Master the art of storing and managing data — because in today’s tech landscape, data is power, and databases are where that power resides.