DigitalOcean CDN Implementation: Enhance Website Speed
Reasons for Migration:
DigitalOcean emerged as the preferred host for our CDN due to its reliability, user-friendly interface, and seamless integration with our existing hosting infrastructure.
The migration plan was designed for efficiency and minimal downtime. The outlined steps provided a roadmap for a swift transition from AWS to DigitalOcean.
Setting up the New DigitalOcean Server:
While the primary focus has been on the technical aspects of migration, it’s worth highlighting the optimization strategies employed to enhance the CDN’s performance.
Leveraging the capabilities of Cloudflare for caching, we ensure that the cached images are served automatically, alleviating the processing load on our servers.
The speed of data transfer, especially when dealing with substantial files like 10GB images, is a critical factor. The efficiency of the migration process can be influenced by the network speed between the old and new servers. In our case, the process took just a few minutes, but individual experiences may vary based on network conditions.
For those managing live websites, considerations extend beyond the technical setup. The need to change DNS records for a seamless transition introduces an element of live testing and verification. Setting up a temporary domain or subdomain for testing purposes ensures that the application functions correctly on the new server before finalizing the migration.
Maintaining transparent communication with users during the migration is pivotal. Informing users about the upcoming changes, potential downtimes, and the expected benefits of the migration fosters a positive user experience.
Clear communication sets expectations and minimizes any inconvenience caused by temporary service interruptions.
The migration process doesn’t conclude with the DNS changes. Implementing post-migration monitoring is crucial to identify and address any unforeseen issues that may arise in the live environment. Continuous monitoring allows for prompt troubleshooting and ensures the sustained optimal performance of the CDN on DigitalOcean.
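As one concrete illustration (not part of the original setup described here), a small PHP script along the following lines could be run on a schedule to confirm that a known CDN asset is still reachable and fast after the DNS change; the URL and thresholds are placeholders.

<?php
// Hypothetical post-migration health check: request a known CDN asset
// and log a warning if it is unreachable or unusually slow.
$testUrl = 'https://cdn.example.com/health-check.png'; // placeholder URL

$ch = curl_init($testUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOBODY, true);   // headers are enough for this check
curl_setopt($ch, CURLOPT_TIMEOUT, 10);    // fail fast if the server hangs
curl_exec($ch);

$status  = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$seconds = curl_getinfo($ch, CURLINFO_TOTAL_TIME);
curl_close($ch);

if ($status !== 200 || $seconds > 2.0) {
    // In practice this could email an administrator or notify a monitoring service.
    error_log("CDN check failed: HTTP {$status} in {$seconds}s for {$testUrl}");
}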
DigitalOcean’s flexible infrastructure allows for future scalability. The decision to use volumes for storage ensures that scaling storage capacity doesn’t necessitate a complete server upgrade. This forward-looking approach aligns with the dynamic nature of web services and prepares the CDN for future growth.
Finally, engaging with the community through platforms like comments sections or forums provides an avenue for users to seek clarification, share feedback, and contribute insights. Community engagement fosters a collaborative environment and may uncover valuable suggestions or improvements that enhance the overall CDN experience.
The migration of a CDN involves not only the technical intricacies but also a holistic approach that considers optimization, user engagement, and future scalability. By incorporating these additional insights into the migration process, we aim to empower others undertaking similar endeavors with a comprehensive understanding of the factors at play.
Laravel Session Optimization: Navigating Efficiency
Our narrative commences with the journey of a Laravel developer transitioning from a legacy PHP application to the more robust Laravel ecosystem. Despite maintaining identical server capacities, an unforeseen spike in CPU usage, soaring to an alarming 100%, sparked apprehension regarding the efficiency of the revamped Laravel application.
As our developer probes deeper, a critical divergence surfaces—the method of session storage. Unlike its predecessor, which stored sessions in a database, the new Laravel application opted for the file system.
This revelation initiates a meticulous examination of the repercussions of varying session storage mechanisms on CPU utilization.
Delving into the Laravel storage directory reveals a multitude of session files. Acknowledging the potential quandary, the developer makes a decisive shift from file-centric storage to a relational database.
This strategic maneuver leads to a substantial reduction in CPU usage, bringing the initial 100% load down to a more manageable 20-30%.
The focal remedy revolves around harnessing the efficacy of relational databases to adeptly manage extensive session data. This section elucidates the process of crafting a dedicated table specifically tailored for storing Laravel sessions, culminating in a performance optimization triumph.
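For reference, Laravel can generate this table with php artisan session:table followed by php artisan migrate. The sketch below shows roughly what that generated migration creates; exact column definitions vary slightly between Laravel versions, so treat it as illustrative rather than the exact code used in this case.

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Inside the up() method of the generated migration:
Schema::create('sessions', function (Blueprint $table) {
    $table->string('id')->primary();               // session identifier
    $table->foreignId('user_id')->nullable()->index();
    $table->string('ip_address', 45)->nullable();  // long enough for IPv6
    $table->text('user_agent')->nullable();
    $table->longText('payload');                   // serialized session data
    $table->integer('last_activity')->index();     // used for garbage collection
});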
Implementing the database solution demands judicious adjustments to the SESSION_DRIVER configuration. This section, enriched with detailed directives, emerges as a pivotal guide in rectifying the heightened CPU usage predicament.
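Concretely, the change is a small configuration switch. A minimal sketch, assuming a standard Laravel setup where the session driver is read from the environment:

// In .env:
//   SESSION_DRIVER=database

// config/session.php (excerpt); the second argument is only the fallback default:
return [
    'driver' => env('SESSION_DRIVER', 'file'),
    // ...
];

// Remember to clear any cached configuration afterwards:
//   php artisan config:clear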
Navigating the shift towards a database-centric session management system, the article meticulously scrutinizes the discernible impact on CPU usage. The optimized configuration manifests as a tangible reduction, providing valuable insights into the efficiency enhancements achieved.
This segment broadens the scope, meticulously weighing the advantages of employing relational databases over file systems for session storage. Delving into the nuanced disparities in data access, reading, and writing, it equips readers with a comprehensive understanding of the intricate dynamics at play.
While our primary focus remains on database solutions, a fleeting introduction of Redis unfolds—an alternative boasting in-memory key-value storage prowess. Although not directly applied in the discussed case, the acknowledgment of Redis hints at the unexplored avenues for further optimization.
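For completeness, and not something applied in the case above, pointing sessions at Redis is a similarly small configuration change, assuming a Redis connection is already defined in config/database.php and the phpredis extension or predis package is installed:

// In .env:
//   SESSION_DRIVER=redis
//   SESSION_CONNECTION=default   // which Redis connection from config/database.php to use

// config/session.php (excerpt)
'driver'     => env('SESSION_DRIVER', 'file'),
'connection' => env('SESSION_CONNECTION'),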
The expedition from identifying rampant CPU usage in a Laravel application to implementing a finely tuned session management solution underscores the indispensable role of storage mechanisms.
The transition from file-centric to database-driven sessions not only resolves immediate concerns but also beckons the exploration of alternative strategies, with Redis emerging as a potential contender.
Exploring the Power of Laravel’s ‘Wherein’ Clause
Before diving into the prepared statements, we assume you have an array of IDs that you want to use in your query. These IDs can represent user IDs, product IDs, or any other relevant identifier.
To efficiently prepare our query, it’s crucial to determine the length of the array of IDs. In PHP, you can obtain this information using the count() function. This step ensures we have the necessary information to create placeholders for our prepared statement.
$ids = [1, 2, 3, 4, 5]; // Your array of IDs
$count = count($ids); // Determine the length of the array
To construct the “WHERE IN” clause, we need to create placeholders in the SQL query for each ID. This is achieved using the implode() function, which joins the placeholders with commas. We use array_fill() to create an array of placeholders, each represented by a question mark (‘?’).
$placeholders = implode(',', array_fill(0, $count, '?'));
Since we are working with an array of integers, it’s crucial to specify the data types for binding parameters in the bind_param() function. We use the str_repeat() function to generate a string representing the data types. In this case, ‘i’ is used for integers, and we repeat it for each ID in the array.
$bindStr = str_repeat('i', $count);
Now that we have our placeholders and bind parameter types ready, we can prepare our SQL statement using MYSQLi. The statement should include the “WHERE IN” clause with the placeholders we generated.
$stmt = $mysqli->prepare("SELECT * FROM table WHERE id IN ($placeholders)");
The final step involves binding the parameters and executing the query. Here, we utilize the splat operator ('...') to pass each element of the array as a separate parameter to the bind_param() method.
$stmt->bind_param($bindStr, ...$ids); // Bind parameters
$stmt->execute(); // Execute the query
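To then read the matched rows, one option is sketched below; it assumes the mysqlnd driver, which provides get_result(). If get_result() is unavailable, bind_result() and fetch() can be used instead.

$result = $stmt->get_result();            // requires the mysqlnd driver
while ($row = $result->fetch_assoc()) {   // one associative array per matching row
    echo $row['id'], PHP_EOL;
}
$stmt->close();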
By following these steps, you’ll be able to harness the full potential of the “WHERE IN” clause in MYSQLi prepared statements with PHP. This approach not only enhances the efficiency of your database queries but also strengthens the security of your applications by preventing SQL injection attacks. Make sure to adapt these techniques to your specific use cases, and you’ll be well-equipped to handle complex queries and optimize your PHP projects.
How IT infrastructure is audited and why to do it
As a result of the audit, the client receives an objective assessment of the state of its IT infrastructure and recommendations for improving its quantitative and qualitative indicators. Obtaining an objective assessment is the main goal of the audit. In the process of achieving this goal, various components of the system are evaluated according to numerous criteria.
It should be understood that the concept of “IT infrastructure audit” is quite extensive, as it includes audits of:
Each type of audit requires separate preparation and different ways of conducting audits. But ideologically, all of them are aimed at finding problems in one or another part of the IT infrastructure. Summarizing the criteria by which these audits are conducted, the following points can be distinguished:
Audit preparation
Virtualization and cloud computing
Virtualization separates computing resources from hardware and allows for the creation of multiple virtual environments on a single physical IT infrastructure. Initially, the technology referred to server virtualization, when a single physical server hosted several virtual servers. Now virtualization is used to abstract not only servers, but also workstations, applications, storage and even network infrastructure.
A hypervisor, a software or hardware device that abstracts and manages the computing resources of the physical infrastructure, is responsible for virtualization.
Virtualization platforms such as Microsoft Hyper-V and VMware vSphere create not just computing resources but full-fledged virtual data centers, with their own infrastructure and services abstracted from the physical IT infrastructure.
The simplest examples of virtualized resources are a dedicated virtual server (VPS/VDS) and virtual desktops (VDI).
A virtual server (VPS/VDS) is an isolated virtual analog of a physical server with specified resource limits and its own operating system. Unlike physical servers, VPS/VDS are quickly created and easily transferred to different platforms, and when they are no longer needed, they are also quickly destroyed.
Virtual desktops (VDI) are workstations with a specific set of programs and applications. All corporate data is stored on a secure remote server, and employees access it from their own computers. VDI allows a single IT engineer to remotely manage thousands of virtual desktops, even if employees are spread across dozens of branches of the same organization.
Cloud computing is the act of running workloads in the cloud, and clouds are the virtual environments in which applications run. By connecting to clouds, users remotely receive virtualized services and computing resources according to their current needs.
Services in the cloud are provided through different models and can include both infrastructure rental and software operation. For example, this can mean renting a virtual server or connecting to a cloud version of 1C.
Under the IaaS (Infrastructure as a Service) model, the provider separates computing resources from the hardware – servers and storage – and provides them to its clients. Each customer receives an isolated virtualized infrastructure of servers, storage, and virtual machines. The provider ensures that the physical hardware is up and running, while the client maintains the virtualized infrastructure on its own, customizing it and installing the necessary software. The advantage of IaaS is that organizations do not need to purchase equipment: if the load increases, the provider supplies additional resources; if it decreases, there is no need to pay for unused capacity.
PaaS (Platform as a Service) implies the provision of a greater range of services than IaaS. Under this model, clients receive a virtual infrastructure with software already customized for specific tasks. Setup and configuration of the platform are handled by the provider, and the client has access to management. The advantage of PaaS is that the client receives a ready-to-work platform and does not spend its own resources supporting it.
Software as a Service (SaaS) is a completely ready-to-use solution. SaaS covers a huge range of software, from email services to CRM systems. The advantage of SaaS is that customers get a ready-to-use service with certain fixed settings. The provider takes care of licensing, timely software updates and technical support.
MYSQLI Statements for PHP Database Security
Safeguarding sensitive user data is crucial for PHP form security, and one key aspect to consider is implementing MYSQLI prepared statements for PHP database security.
MYSQL is a widely-used relational database system, and MYSQLI stands as a robust PHP extension for seamless interaction with MYSQL databases. Prepared statements are queries that are precompiled and executed later, with the inclusion of data.
In a nutshell, prepared statements serve as a shield for websites against the nefarious SQL Injection attacks that can compromise their security. Additionally, prepared statements can offer improved performance compared to conventional queries, as cited by various sources. From my own experience, for straightforward queries, their speed might be comparable, but for recurring queries, prepared statements shine with remarkable efficiency. Another noteworthy advantage is their superior readability, making them easy to comprehend and manage.
Before diving into the world of prepared statements in PHP, there are a few essential prerequisites you must have in place:
<?php
$mysqli = new mysqli( 'hostname', 'username', 'password', 'database' );
In this tutorial, we will be working with a “user” table in our database, structured as follows:
id   name     email
1    Teodor   teod@gmail.com
2    Christ   christoperkhawand@gmail.com
3    Austin   austin@gmail.com
4    Ayush    ayushagarwal@gmail.com
Now, let’s proceed to explore how to use prepared statements in PHP.
Let’s explore prepared statements in PHP with MYSQLI, focusing on fundamental concepts and techniques for executing various query types, including SELECT and UPDATE.
To begin, we’ll outline the basic steps:
$stmt = $mysqli -> prepare('SELECT * FROM users WHERE id = ?');
$userId = 2;
$stmt -> bind_param('i', $userId);
In the bind_param() method, the first parameter specifies the data types of the variables being bound. If you had multiple variables with different data types to bind, you could use a string like ‘iisi’ (integer, integer, string, integer).
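For instance, a query with four placeholders of mixed types would be bound as follows; the age, country and status columns here are purely hypothetical, chosen only to illustrate the type string.

// Hypothetical example: only the type string matters here.
$stmt = $mysqli -> prepare('SELECT name, email FROM users WHERE id > ? AND age > ? AND country = ? AND status = ?');
$minId   = 1;
$minAge  = 18;
$country = 'LK';
$status  = 1;
// 'iisi' = integer, integer, string, integer: one letter per variable, in order.
$stmt -> bind_param('iisi', $minId, $minAge, $country, $status);
$stmt -> execute();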
It’s important to note that binding a literal value directly, as shown below, is not valid in PHP. The arguments for the bind_param function should be variables, with the exception of the first one:
$stmt -> bind_param('i', 2);
Next, we execute the query:
$stmt -> execute();
The subsequent steps in the process will vary depending on the type of query you intend to perform. In the following sections, we’ll explore examples and techniques for different types of MYSQLI prepared statements in PHP.
$stmt = $mysqli -> prepare('SELECT name, email FROM users WHERE id = ?');
$userId = 1; // or $_GET['userId'];
$stmt -> bind_param('i', $userId);
$stmt -> execute();
$stmt -> store_result();
$stmt -> bind_result($name, $email);
$stmt -> fetch();
echo $name; // Teodor
echo $email; // teod@gmail.com
Initially, this might seem challenging, especially for beginners. However, as you progress through the subsequent steps, you’ll grasp the concept more clearly. Keep in mind that the fetch() function stores the result of the current row into the variables specified in bind_result(). By default, the initial current row is the first one in the result set. When we invoke fetch() once, the current row becomes the second one in the results. However, it’s worth noting that in this particular query, we’re dealing with only a single row.
To retrieve multiple rows, the pattern is the same; we simply loop over fetch() until it runs out of rows.
$stmt = $mysqli -> prepare('SELECT name, email FROM users');
$stmt -> execute();
$stmt -> store_result();
$stmt -> bind_result($name, $email);
while ($stmt -> fetch()) {
echo $name;
echo $email;
}
The bind_param() function is unnecessary here, as no variables need to be passed. The code above retrieves all users’ data and displays their names and emails.
The fetch() function returns true when it succeeds and false when it fails or when no rows are found. Therefore, we can directly employ it as the condition for the while loop.
Each time fetch() is invoked, the data from the current row is stored in the $name and $email variables, and the cursor advances to the next row. So, when fetch is called again, it retrieves the data from the subsequent row.
To count the rows in a result set, use the num_rows property.
$stmt = $mysqli -> prepare('SELECT name, email FROM users');
$stmt -> execute();
$stmt -> store_result();
// 4
echo $stmt -> num_rows;
Keep in mind that you should call store_result() before accessing the num_rows property.
Another way to work with results is to call get_result() after executing the statement.
$stmt = $mysqli -> prepare('SELECT name, email FROM users WHERE id > ?');
$greaterThan = 1;
$stmt -> bind_param('i', $greaterThan);
$stmt -> execute();
$result = $stmt -> get_result();
Now, the $result variable is equivalent to using $mysqli->query(…). You can employ it as shown below to work with the results.
while ($row = $result -> fetch_assoc()) {
echo $row['name'];
echo $row['email'];
}
In the realm of MYSQL, wildcards play a pivotal role in pattern matching. They are instrumental in searching for specific patterns within your data.
<?php
$stmt = $mysqli -> prepare('SELECT name, email FROM users WHERE name LIKE ?');
$like = 'a%';
$stmt -> bind_param('s', $like);
$stmt -> execute();
$stmt -> store_result();
$stmt -> bind_result($name, $email);
while ($stmt -> fetch()) {
echo $name;
echo $email;
}
In this instance, we will retrieve all users whose names commence with the letter “a,” which includes “Austin” and “Ayush.”
When working with prepared statements, retrieving data based on an array of IDs can be a challenging task. It involves dynamically incorporating question marks into the query to accommodate the varying number of IDs in the array.
// array of user IDs
$userIdArray = [1,2,3,4];
// number of question marks
$questionMarksCount = count($userIdArray);
// create an array with question marks
$questionMarks = array_fill(0, $questionMarksCount, '?');
// join them with ,
$questionMarks = implode(',', $questionMarks);
// data types for bind param
$dataTypes = str_repeat('i', $questionMarksCount);
$stmt = $mysqli -> prepare("SELECT name, email FROM users WHERE id IN ($questionMarks)");
$stmt -> bind_param($dataTypes, ...$userIdArray);
$stmt -> execute();
$stmt -> store_result();
$stmt -> bind_result($name, $email);
while ($stmt -> fetch()) {
echo $name;
echo $email;
}
$stmt = $mysqli -> prepare("SELECT name, email FROM users LIMIT ? OFFSET ?");
// limit of rows
$limit = 2;
// skip n rows
$offset = 1;
$stmt -> bind_param('ii', $limit, $offset);
$stmt -> execute();
$stmt -> store_result();
$stmt -> bind_result($name, $email);
while ($stmt -> fetch()) {
echo $name;
echo $email;
}
In this example, we employ the BETWEEN clause in a prepared SELECT statement to retrieve data from the “users” table. We specify a range of IDs between $betweenStart and $betweenEnd.
$stmt = $mysqli -> prepare("SELECT name, email FROM users WHERE id BETWEEN ? AND ?");
$betweenStart = 2;
$betweenEnd = 4;
$stmt -> bind_param('ii', $betweenStart, $betweenEnd);
$stmt -> execute();
$stmt -> store_result();
$stmt -> bind_result($name, $email);
while ($stmt -> fetch()) {
echo $name;
echo $email;
}
In this example, we demonstrate how to insert a single row of data into the “users” table using prepared statements.
$stmt = $mysqli -> prepare('INSERT INTO users (name, email) VALUES (?,?)');
$name = 'Akhil';
$email = 'akhilkumar@gmail.com';
$stmt -> bind_param('ss', $name, $email);
$stmt -> execute();
In scenarios where you have an auto-incremental column to store user IDs, it’s often crucial to determine the ID of the user you’ve just added to the database. You can achieve this by utilizing the $stmt->insert_id property.
Here’s an example:
$stmt = $mysqli -> prepare('INSERT INTO users (name, email) VALUES (?,?)');
$name = 'Akhil';
$email = 'akhilkumar@gmail.com';
$stmt -> bind_param('ss', $name, $email);
$stmt -> execute();
echo 'Your account id is ' . $stmt -> insert_id;
Performing repeated insertions with a single prepared statement is a robust technique for adding multiple rows of data to a database efficiently. With this approach, you prepare the statement once and reuse it to insert multiple rows in a streamlined manner.
Here’s an example illustrating this technique:
$newUsers = [
[ 'sulliops', 'sulliops@gmail.com' ],
[ 'infinity', 'infinity@gmail.com' ],
[ 'aivarasco', 'aivarasco@gmail.com' ]
];
$stmt = $mysqli -> prepare('INSERT INTO users (name, email) VALUES (?,?)');
foreach ($newUsers as $user) {
$name = $user[0];
$email = $user[1];
$stmt -> bind_param('ss', $name, $email);
$stmt -> execute();
echo "{$name}'s account id is {$stmt -> insert_id}";
}
As you observe, with each iteration of the loop, the $stmt->insert_id property is updated to reflect the ID of the newly inserted row, making it straightforward to track the IDs for all the added records.
Updating rows with a prepared statement follows the same pattern.
$stmt = $mysqli -> prepare('UPDATE users SET email = ? WHERE id = ? LIMIT 1');
$email = 'newemail@hyvor.com';
$id = 2;
$stmt -> bind_param('si', $email, $id);
$stmt -> execute();
Sometimes you will need to know how many rows were affected by an UPDATE query. The affected_rows property provides this.
$stmt = $mysqli -> prepare('UPDATE users SET email = ? WHERE name = ? LIMIT 1');
$email = 'newemail@hyvor.com';
$name = 'teodor';
$stmt -> bind_param('ss', $email, $name);
$stmt -> execute();
// 1
echo $stmt -> affected_rows;
Deleting rows works the same way, and affected_rows reports how many rows were removed.
$stmt = $mysqli -> prepare('DELETE FROM users WHERE id = ?');
$userId = 4;
$stmt -> bind_param('i', $userId);
$stmt -> execute();
// number of deleted rows
echo $stmt -> affected_rows;
Dealing with errors in MYSQLI prepared statements is a crucial aspect of database interaction. Here are some valuable tips for debugging and handling errors:
Sometimes, the $mysqli->prepare() function fails due to an incorrect query. You can identify this by checking if $stmt is a boolean (false) instead of an object.
How to Identify Preparation Failures:
$stmt = $mysqli -> prepare('SELECT * FROM no_table WHERE id = ?');
$id = 1;
$stmt -> bind_param('i', $id);
If you encounter a PHP error message such as “Call to a member function bind_param() on boolean” while attempting to use methods on the $stmt variable, it indicates a failure in the preparation of the statement. In the event of a preparation failure, $mysqli->prepare() returns false, making $stmt a boolean value rather than an object. You can leverage $mysqli->error to pinpoint the specific error within your query.
$stmt = $mysqli -> prepare('SELECT * FROM no_table WHERE id = ?');
echo $mysqli -> error;
Execution failures typically do not trigger error messages. Consequently, it is essential to incorporate a conditional check to verify the success of the execution. If the execution was not successful, you can rely on $stmt->error to reveal the nature of the error.
Here’s an example:
$stmt = $mysqli -> prepare('INSERT INTO stmt_users (name) VALUES (?)');
$name = 'User';
$stmt -> bind_param('s', $name);
if (! $stmt -> execute()) {
echo $stmt -> error;
}
In the context of our sample table, the encountered error message is "Field 'email' does not have a default value."
The objective of this tutorial was to comprehensively explore the various methods of implementing prepared statements. We delved into their usage in SELECT, INSERT, UPDATE, and DELETE operations. It is my aspiration that this article has provided a thorough understanding of MYSQLI prepared statements for those seeking to enhance their knowledge in this area.
Machine learning methods
Machine learning is a branch of artificial intelligence (AI) that studies methods and algorithms that allow computer systems to learn automatically from data and make predictions or decisions without being explicitly programmed. Unlike traditional programming, where the developer explicitly specifies the instructions the system follows, in machine learning the model is trained on the data provided, and the results of that training become the basis for further decisions.
There are several key concepts in machine learning that need to be understood.
Supervised learning
Supervised learning is the process of training a model on labeled data, where each example has a corresponding label – the desired output of the model. The goal of the model is to find patterns in the data to predict labels for new, unseen examples.
In the field of supervised learning, there is a wide range of methods and algorithms for solving different problems.
Support Vector Machine (SVM). SVM is a powerful algorithm for classification and regression tasks. It constructs a hyperplane that separates examples of different classes with the largest possible margin (a short formulation is sketched after this list).
Decision trees and random forests. A decision tree is a tree-structured model in which each node contains a condition on one of the data’s features. A random forest is an ensemble of decision trees. Both are widely used for classification and regression.
Neural networks. A model loosely inspired by the workings of the human brain. Neural networks consist of artificial neurons and the connections between them. They have been successfully applied in a variety of areas, including computer vision, natural language processing, and speech recognition.
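As a brief aside on the SVM entry above, the "largest gap" idea has a standard mathematical formulation (the hard-margin case, shown here purely as an illustration). Given training examples x_i with labels y_i in {-1, +1}, the separating hyperplane w · x + b = 0 is found by solving

\[
\min_{w,\,b}\ \tfrac{1}{2}\lVert w \rVert^{2}
\quad \text{subject to} \quad
y_i\,(w \cdot x_i + b) \ge 1 \ \text{for all } i,
\]

which maximizes the margin 2 / ||w|| between the two classes.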
Unsupervised learning
Unsupervised learning is a branch of machine learning in which models analyze data and find hidden structure in it without pre-existing labels. This approach can automatically extract information from large amounts of data, making it particularly useful when dealing with unstructured data such as images or audio recordings.