Emergency
If you need to follow this how-to on the server itself, your best option is the Lynx web browser in a terminal session; it is text-mode only. Obviously, you will first copy the backup onto the server's file system.
Terminal-based restore
Often, you don't need to restore images or attachments: they are stored in the server filesystem. In that case you can remove the filestore/ part from the backup file and use the one already on your server; it is the heaviest part of the backup (see below). Assuming the database name is www and the owner is odoo, proceed as below (this way you can reuse your filestore/ folder):
$ sudo su postgres
$ dropdb www
$ createdb -O odoo www
$ psql www < dump.sql

Restoring the database from the terminal
Understanding the backup file
The Odoo 10 backup utility performs a classic PostgreSQL dump and aggregates metadata and static files. Unzip the backup file and you will get:
filestore/ — static files stored on disk, like images and attachments
dump.sql — the PostgreSQL backup: your data
manifest.json — the list of modules, Odoo version, database name, PostgreSQL version, etc.

Since dump.sql is the direct result of a PostgreSQL command, it can be restored with a simple PostgreSQL command, and the filestore/ part copied to the server (usually in /var/lib/odoo).
Replacing the current database
If you want to restore under the same database name, you must first delete your database before recreating it. This way you can reuse your filestore/ folder:
$ dropdb databasename

Restoring the dump file
Before restoring, you have to create a new database (create it with the same PostgreSQL user as Odoo, or you won't be able to use it):
$ createdb newdatabasename
Or pass the PostgreSQL user name:
$ createdb -O dbusername newdatabasename
Unzip the backup; the filestore/ files go to /var/lib/odoo/filestore/newdatabasename, and dump.sql is restored with:
$ psql newdatabasename < dump.sql

In Odoo, inventories are managed through Inventory / Inventory Control / Inventory Adjustments. Importing your stock means creating an inventory adjustment. You will import your first stock (initial inventory) using exactly the same procedure as later stock imports.
Minimal prerequisites:
- At least one warehouse created (should be done automatically)
- Products and variants already imported

The first obvious option is to create a new Inventory Adjustment, checking the « All products » option.
Then press the Start Inventory button: products will be populated. You can modify stock inline in the Real Quantity column, but with lots of products you will prefer to do that in a spreadsheet:
- keep your first Inventory Adjustment in draft mode,
- go back to Inventory Adjustments, select your inventory, and export it (selecting the required fields),
- update your spreadsheet and import it back.
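The export/import round-trip above can be sketched with a small CSV. The column names here are assumptions, not taken from the original post: match them to the fields you actually export from Odoo.

```shell
# Hypothetical import file for an inventory adjustment. The column names
# below are assumptions: match them to the fields you export from Odoo.
cat > inventory.csv <<'EOF'
id,line_ids/product_id,line_ids/product_qty,line_ids/location_id
__export__.stock_inventory_1,PROD-001,15,WH/Stock
,PROD-002,8,WH/Stock
EOF
head -1 inventory.csv
```

Update the quantity column in your spreadsheet, then import the file back from the Inventory Adjustments list view.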
This is a how-to for installing Nextcloud 12.3, with all checks passed, on FreeNAS 11. It is a rewrite of the original post with a few adjustments where needed to make it fully functional. I know this works, as I have re-run it four times today to ensure its accuracy. Some things may need to be edited based on your volume name, but for the most part it is plug and play. A great thanks to all of those who have contributed.

Creating the Dataset & Jail
Create the dataset within the FreeNAS user space: Storage > Create ZFS Dataset.
Code:
Source = /mnt/Cloud/db
Destination = /var/db/mysql

Setting the primary cache
In the FreeNAS user space shell:
$ zfs set primarycache=metadata Cloud/db

F.A.M.P Installation
In this section we are going to install FAMP, an iteration of LAMP (Linux, Apache, MySQL, PHP) for FreeBSD. I chose this because I personally haven't had much luck with nginx or lighttpd. Another guide suggested lighttpd and SQLite, but those might not hold up to a good number of users storing a fair amount of data.
The setup is: FreeBSD 11.0, Apache 2.4, MariaDB 10.1, PHP 7.0 or 7.1. Do NOT install PHP 7.2: Nextcloud is absolutely incompatible with that version as of this writing. This provides the basis for our web-serving jail.

Via PuTTY, SSH into the FreeNAS host. From the FreeNAS user space, run:
$ jls
You will then see your jails. Then run jexec with the number of your jail, for example:
$ jexec 2
Before we get started, let's add a few necessary packages, as they aren't currently installed:
$ portsnap fetch extract
$ pkg install nano wget sudo
We will install each part of FAMP one by one. FreeBSD is the operating system, so we're good to go on that!
Install Apache 2.4
$ pkg install apache24
Enable it in rc.conf:
$ sysrc apache24_enable=yes
Start Apache:
$ service apache24 start
Okay, let's check that it works! Open a web browser on a local machine on your network, navigate to the jail's IP, and you should see the text 'It works!'
Install MariaDB 10.1
$ pkg install mariadb101-server
Enable it in rc.conf:
$ sysrc mysql_enable=yes
Start the MySQL service:
$ service mysql-server start
Run the wizard script:
$ mysql_secure_installation
For this step, read and follow the prompts. By default there is no root password, so just hit enter when prompted for it; you must then create a new one, and answer Y to all the following questions.

Log in to MySQL, and create the Nextcloud database and user:
$ mysql -u root -p
Enter the password you made for root during the MariaDB 10.1 setup. Enter each of these commands one by one, and make sure to include the semicolon.
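The SQL statements themselves are missing from the original post; the ones below are the usual Nextcloud database setup. The database name, user name, and password are placeholders, so adjust them to your own values.

```sql
-- Hypothetical names: change 'nextcloud', 'nc_admin', and
-- 'YOURPASSWORD' to your own values.
CREATE DATABASE nextcloud;
CREATE USER 'nc_admin'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nc_admin'@'localhost';
FLUSH PRIVILEGES;
EXIT;
```

Note the user and password here are what you will enter on the Nextcloud setup screen later.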
Code:
Waiting for PIDS: 80591.
Performing sanity check on apache24 configuration:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
Syntax OK
Starting apache24.
If you see "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message" when starting Apache, do the following. Run:
$ nano /usr/local/etc/apache24/httpd.conf
Search for 'ServerName'; the section will look like the block below. Enter your jail's IP address, xxx.xxx.x.xxx:80.
Code:
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
ServerName your-jails-IP:80

Code:
Your data directory and your files are probably accessible from the Internet. The .htaccess file is not working.
It is strongly recommended that you configure your web server so that the data directory is no longer accessible, or that you move the data directory outside the web server's document root. You are accessing this site via HTTP. We strongly suggest you configure your server to require HTTPS instead, as described in our security tips.

Have no fear if this error bothers you: there is a fix. Force a redirect to HTTPS: let's introduce a redirect from HTTP to HTTPS. This is mainly for the LAN, as when the site is accessed from the internet it automatically redirects to HTTPS and is not reachable via plain HTTP.
Be advised that HTTPS will say the connection is not secure when accessing from the LAN using the direct IP. Edit the .htaccess file by entering the following command:
$ nano /usr/local/www/apache24/data/nextcloud/.htaccess
While in the .htaccess file, add the redirect directly below the existing text.
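The redirect rules themselves were cut off in the original post, so the block below is a common HTTP-to-HTTPS rewrite pattern, not necessarily the author's exact rules.

```apache
# Generic HTTP-to-HTTPS redirect. Assumptions: mod_rewrite is loaded
# and AllowOverride permits rewrites in .htaccess.
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

After saving, restart Apache and confirm that browsing to the plain HTTP address lands you on the HTTPS one.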
Hi, great tutorial! A video would be awesome. I am having a problem with the step 'Lets Cache!!!'
When I want to restart Apache:
'Performing sanity check on apache24 configuration: AH00526: Syntax error on line 26 of /usr/local/etc/apache24/Includes/juanperez.zapto.org.conf: SSLCertificateFile: file '/usr/local/etc/letsencrypt/live/juanperez.zapto.org/fullchain.pem' does not exist or is empty'
Also, when I try to go to /usr/local/etc/letsencrypt/, I get 'error: No such file or directory'. I don't know what to do. Also, when I comment those lines out with # I get this other issue:
service apache24 restart
Performing sanity check on apache24 configuration: AH00526: Syntax error on line 31 of /usr/local/etc/apache24/Includes/juanperez.zapto.org.conf: SSLCipherSuite takes one argument, Colon-delimited list of permitted SSL Ciphers ('XXX.:XXX' - see manual)

New to all this. I want to say thank you for all the work and time you've spent to help people like me who are babies at all this. I need your help to figure out where or what I've done wrong here. I got as far as the section:
Restart Apache:
$ service apache24 restart
Navigate to the jail's IP and you should now see the setup screen for Nextcloud!!
Problem is, I don't get the Nextcloud screen; I just get the initial 'It works!' page. I tried to backstep to see where I may have gone wrong, but no luck. Thank you in advance.

Code:
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for MYDOMAIN
Using the webroot path /usr/local/www/apache24/data/nextcloud for all unmatched domains.
Waiting for verification.
Cleaning up challenges
Failed authorization procedure. MYDOMAIN (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching Timeout
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: MYDOMAIN
Type: connection
Detail: Fetching Timeout
To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address. Additionally, please check that your computer has a publicly routable IP address and that no firewalls are preventing the server from communicating with the client. If you're using the webroot plugin, you should also verify that you are serving files from the webroot path you provided. Your account credentials have been saved in your Certbot configuration directory at /usr/local/etc/letsencrypt.
You should make a secure backup of this folder now. This configuration directory will also contain certificates and private keys obtained by Certbot, so making regular backups of this folder is ideal.
root@nextcloud1: #
Hello Spiceworks Community, I have a potential client that wants me to build a 'server' that can be used to store 130-150TB of raw video footage. However, the client would like a RAID configuration so that there is a fully redundant copy of that data in the event of data corruption or disk failure (so really 260-300TB).
The client also wants to be able to edit that footage directly from the 'server' from one to two video editing stations. I don't believe an actual 'server' is necessary; a high-end NAS/disk array is really what is needed. I would love to hear recommendations from anyone who has built something similar or can provide valuable information to consider, while I'm drafting possible tiered options (ideal/best, good, decent) for my potential client (operating system, types of drives, controllers, case, NICs, etc.).
Any ideas and advice are eagerly welcome. Thank you so much in advance.

Any kind of budget, form factor (rackmount, tower), or any other consideration? At minimum I would be looking at something like ZFS on Linux running on whatever distro is preferred, set up with probably:
- 2-3 RAID-Z2/RAID-Z3 vdevs to make up the primary storage, with an identical setup as a mirror, either on the same machine (via external DAS/expanders) or a separate machine with the same setup.
- The RAID controller would tie into this, whether you're going for a software-based solution or a hardware one.
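As a sketch of the vdev layout suggested above, two 10-disk RAID-Z2 vdevs striped into one pool would look like the following. The pool name and device names are hypothetical; adjust them to your controller's numbering.

```shell
# Hypothetical pool name ('tank') and device names (da0..da19):
# two 10-disk RAID-Z2 vdevs striped together in a single pool.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
zpool status tank
```

The mirror copy described above would be a second, identically built pool (same machine via external DAS, or a separate box) kept in sync by replication.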
Depending on the video and editor requirements (whether they want real-time scrubbing or something similar), probably add a large cache or tiered storage for active projects via some SAS/SATA/PCIe SSDs. Networking would depend on the clients, but I would probably go for dual 10GbE anyway. I would expect a budget of at least $15-20k+ based on the hard drives alone.

Xavierperry wrote: Thanks for your input, DBeato. The budget is relatively high and somewhat flexible.
Preferably $15k or below, but may go up to $20k. The editing stations will be running Windows 10. Since the footage will be worked on directly from the storage, the assumption is that it will be worked on daily. This system is for working on a documentary project. Connectivity will be over LAN.
Hope that helps.

Using 10TB drives, you would need at least 36 drives to get 150TB+ plus a mirror/backup with RAID-6/Z2. At $400 per drive for the cheapest consumer SATA versions, you're looking at $15k plus taxes. SAS/enterprise SATA drives are $500 per drive.
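The drive cost quoted above checks out as simple arithmetic:

```shell
#!/bin/sh
# Back-of-the-envelope cost check for the figures quoted above.
DRIVES=36        # drives needed for ~150TB usable plus a mirror, per the post
PRICE=400        # cheapest consumer 10TB SATA, USD per drive
TOTAL=$((DRIVES * PRICE))
echo "${DRIVES} drives x \$${PRICE} = \$${TOTAL}"
```

That lands at $14,400 for the drives alone, which is the "about $15k plus taxes" in the post, before any spares, SSDs, chassis, or licensing.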
You've already exceeded your max budget before we even start talking about spares, SSDs for cache, servers, chassis, software, and licenses. You should expect the final price to be closer to $20k+.

Hi Xavier, I would recommend you choose a storage device that has RAID, to ensure redundancy in the hardware: if a drive fails you still have access to your data. However, that alone does not make the data redundant; for this I would recommend you use Arcserve to send copies to tape, a secondary disk, or the cloud. RAID on the hardware allows for uptime on the storage device; if the RAID were to fail, the backup copy would be available, allowing for no data loss. I would grab a free demo copy of our software to test the deduplication and compression you might get, so that you can predict data reduction ratios before purchasing the second media (e.g. tape, disk, cloud).

You can build something to store that much data, but how much performance do you need?
Do you need any of this data backed up, or is this a staging area? It seems like this either needs to be a staging area (which means you want RAID 0 or SSD and no backup) or you need backup storage in addition to 150TB usable. If people are using this over the network, you need to get an idea of the traffic and figure out whether 1Gbit is enough. Then again, with only a $15-20k budget, 10Gbit switching is out of the question.
Michael.SC wrote: At minimum I would be looking at something like ZFS on Linux running on whatever distro is preferred, set up with probably: 2-3 RAID-Z2/RAID-Z3 vdevs to make up the primary storage with an identical setup as a mirror, either on the same machine (via external DAS/expanders) or a separate machine with the same setup. RAID controller would tie into this; whether you're going for a software-based solution or hardware one
You don't (99% of the time) use RAID controllers with ZFS, other than flashing them to Direct/IT/JBOD mode. If you want the benefits of ZFS, you don't want to hide/obfuscate/abstract anything from it that you don't have to. Also, I wouldn't jump straight to RAID-Z for primary storage, for multiple reasons.
I did work for such an outfit some years ago; they had absolutely no regard for money and always wanted the best money could buy. I suggested several places where they could save money, but they were not interested in the slightest, and most hardware was already bought anyway. They are bankrupt now, of course. Anyway, they set up a SAN (one especially made for video editing (doh), adding another 25-30% to the price no doubt) and mounted drives from it across Fibre Channel. A ludicrous money black hole, and with performance no better than any other server.
And of course RAID 5, since 'the controllers are redundant'. It made absolutely no sense, but they had hired an 'expert', so I backed out quietly and found another job.
So the 45drives option, as has been mentioned, sounds like a good choice, but I think (if that other place was anything to go by) you should select a dual-port 10Gb NIC, either fibre or copper depending on the rest of their setup (how far from the actual workstations will this server be?), and 10Gb NICs in your workstations. The editing suite they used could work from normal SMB shares no problem, and worked smoothly over a 1Gb connection (we tested it), but I don't know enough to say whether that would still be good enough with 4K and 8K editing. And please, stay away from RAID 5 as the easy way out to get more storage. For this you will need massive drives, and the added speed from RAID 10 won't hurt you.
I wouldn't even deliver a plan for this as a consultant for less than $15k, let alone provide the hardware. You're thinking pro system on a home budget. To get realistic, you'll have to get more operational specifics. Full-time random access to all 150TB? Not happening.
Fast SSD-based storage for the working copy of data being edited. Moved to on-line, but slower storage after work. Other material stored there brought into the fast SSD area for editing. Move it all to archive storage with Veeam backup (for example). You can have all your data, fast access, or security.
But you can't have it all for that budget. Good feedback from everyone so far. I really appreciate the help.
@DBeato - Looked at 45drives.com. Really great solution. @Corbin - Great video, thank you. @toby wells - The client will be using the Adobe suite for editing, and probably DaVinci Resolve as well.
Not sure what else at this point. @snorble - I also believe 10 gigabit switching will be necessary. @Breffni Potter - Thank you for the advice. @Rune3280 - Thank you. @BBigford - Curious.
@Robert5205 - My initial estimate is that the request should run $30-40k. However, I'm also trying to look at lower-cost options (for the client's sake). I will offer tiered options.
Thanks for your recommendations. Xavierperry wrote: Good feedback from everyone so far. I really appreciate the help. @toby wells - The client will be using the Adobe suite for editing, and probably DaVinci Resolve as well. Not sure what else at this point.
Ah, so you should know Premiere project files are network-aware, so unless you are using the (still beta) Teams collaboration tool, you risk file corruption. The alternatives are something like SNS with their shared storage. This project sounds far closer to the model we would use in the advertising industry, where small shops have local/fast SSD drives for ingest and editing, then archive onto second-tier network storage. With the budget you have, you can't afford much more than this.
A few 500GB SSD drives for the edit stations and a big NAS is all you can really get anyway, so forget scrubbing and even preview renders on the fly; it just won't work. Tell them to up the budget; this isn't just a file storage project, and video workflow is very different. Agree with others.
Sit down with client and get more details on this. Are they planning to have teams of people editing video files daily?
Would only 10 TB of that 150 TB be used by people at any given time? Or is all in use, all the time?
What sort of video? With 4K/8K and lots of people, 1GbE is not going to be enough. What performance do they actually expect here?
You should charge for your time to produce these options. Once the options are given, they can go with you or with somebody else. But don't deliver the goods (the options) for free, as the client could go elsewhere and have it built based on what you suggest, and you would end up with nothing. Charge for your time here as a consultant.
Any project to build it is then its own thing. If you can get away with a large box for 'long-term storage/archive' and a smaller box for 'live production files', that would help. Editors should move a project from archive to production to work on it; once done, archive it again.
(Just my initial thoughts of course).